{"__v":1,"_id":"57db128c5056c819009fffbb","api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"body":"Artificial Intelligence (AI) is already part of our lives. Whenever you pick up your smartphone, you’re already seeing what AI can do for you, from tailored recommendations to relevant search results. With Einstein Vision, developers can harness the power of image recognition to build AI-powered apps fast. All without a data science degree!\n\nEinstein Vision is part of the Einstein suite of technologies, and you can use it to AI-enable your apps. Leverage pre-trained classifiers, or train your own custom classifiers to solve a vast array of specialized image-recognition use cases. Developers can bring the power of image recognition to CRM and third-party applications so that end users across sales, service, and marketing can discover new insights about their customers and predict outcomes that lead to smarter decisions.\n\n<sub>Rights of ALBERT EINSTEIN are used with permission of The Hebrew University of Jerusalem. Represented exclusively by Greenlight.</sub>","category":"57db122d5056c819009fffb8","createdAt":"2016-09-15T21:28:44.503Z","excerpt":"","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":0,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"introduction-to-the-einstein-predictive-vision-service","sync_unique":"","title":"Introduction to Salesforce Einstein Vision","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

# Introduction to Salesforce Einstein Vision


Artificial Intelligence (AI) is already part of our lives. Whenever you pick up your smartphone, you see what AI can do for you, from tailored recommendations to relevant search results. With Einstein Vision, developers can harness the power of image recognition to build AI-powered apps fast. All without a data science degree!

Einstein Vision is part of the Einstein suite of technologies, and you can use it to AI-enable your apps. Leverage pre-trained classifiers, or train your own custom classifiers to solve a vast array of specialized image-recognition use cases. Developers can bring the power of image recognition to CRM and third-party applications so that end users across sales, service, and marketing can discover new insights about their customers and predict outcomes that lead to smarter decisions.

<sub>Rights of ALBERT EINSTEIN are used with permission of The Hebrew University of Jerusalem. Represented exclusively by Greenlight.</sub>
{"__v":1,"_id":"57db1540c2a3a434005f7242","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"The Einstein Vision API enables you to tap into the power of AI and train deep learning models to recognize and classify images at scale. You can use pre-trained classifiers or train your own custom classifiers to solve unique use cases.\n \nFor example, Salesforce Social Studio integrates with this service to expand a marketer’s view beyond just keyword listening. You can “visually listen” to detect attributes about an image, such as detecting your brand logo or that of your competitor in a customer’s photo. You can use these attributes to learn more about your customers' lifestyles and preferences.\n \nImages contain contextual clues about all aspects of your business, including your customers’ preferences, your inventory levels, and the quality of your products. You can use these clues to enrich what you know about your sales, service, and marketing efforts to gain new insights about your customers and take action. 
The possibilities are limitless with applications that include:\n\n- Visual search—Expand the ways that your customers can discover your products and increase sales.\n - Provide customers with visual filters to find products that best match their preferences while browsing online.\n - Allow customers to take photos of your products to discover where they can make purchases online or in-store.\n\n\n- Brand detection—Monitor your brand across all your channels to increase your marketing reach and preserve brand integrity.\n - Better understand customer preferences and lifestyle through their social media images.\n -  Monitor user-generated images through communities and review boards to improve products and quality of service.\n - Evaluate banner advertisement exposure during broadcast events to drive higher ROI.\n\n\n- Product identification—Increase the ways that you can identify your products to streamline sales processes and customer service.\n - Identify product issues before sending out a field technician to increase case resolution time.\n - Discover which products are out of stock or misplaced to streamline inventory restocking.\n - Measure retail shelf-share to optimize product mix and represent top-selling products among competitors.\n\n\n#Deep Learning in a Nutshell#\nDeep learning is a branch of machine learning, so let’s first define that term. Machine learning is a type of AI that provides computers with the ability to learn without being explicitly programmed. Machine learning algorithms can tell you something interesting about a set of data without writing custom code specific to a problem. Instead, you feed data to generic algorithms, and these algorithms build their own logic as it relates to the patterns within the data.\n\nIn deep learning, you create and train a neural network in a specific way. A neural network is a set of algorithms designed to recognize patterns. In deep learning, the neural network has multiple layers. 
At the top layer, the network trains on a specific set of features and then sends that information to the next layer. The network takes that information, combines it with other features and passes it to the next layer, and so on. \n\nDeep learning has increased in popularity because it has proven to outperform other methodologies for machine learning. Due to the advancement of distributed compute resources and businesses generating an influx of image, text, and voice data, deep learning can deliver insights that weren’t previously possible.","category":"57db122d5056c819009fffb8","createdAt":"2016-09-15T21:40:16.560Z","excerpt":"","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":1,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"what-is-the-predictive-vision-service","sync_unique":"","title":"What is Einstein Vision?","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

# What is Einstein Vision?


The Einstein Vision API enables you to tap into the power of AI and train deep learning models to recognize and classify images at scale. You can use pre-trained classifiers or train your own custom classifiers to solve unique use cases.

For example, Salesforce Social Studio integrates with this service to expand a marketer’s view beyond keyword listening. You can “visually listen” to detect attributes of an image, such as your brand logo, or a competitor’s logo, in a customer’s photo. You can use these attributes to learn more about your customers' lifestyles and preferences.

Images contain contextual clues about all aspects of your business, including your customers’ preferences, your inventory levels, and the quality of your products. You can use these clues to enrich what you know about your sales, service, and marketing efforts to gain new insights about your customers and take action.

The possibilities are limitless, with applications that include:

- Visual search—Expand the ways that your customers can discover your products and increase sales.
  - Provide customers with visual filters to find products that best match their preferences while browsing online.
  - Allow customers to take photos of your products to discover where they can make purchases online or in-store.

- Brand detection—Monitor your brand across all your channels to increase your marketing reach and preserve brand integrity.
  - Better understand customer preferences and lifestyles through their social media images.
  - Monitor user-generated images in communities and review boards to improve products and quality of service.
  - Evaluate banner advertisement exposure during broadcast events to drive higher ROI.

- Product identification—Increase the ways that you can identify your products to streamline sales processes and customer service.
  - Identify product issues before sending out a field technician to speed up case resolution.
  - Discover which products are out of stock or misplaced to streamline inventory restocking.
  - Measure retail shelf share to optimize product mix and representation of top-selling products among competitors.

## Deep Learning in a Nutshell

Deep learning is a branch of machine learning, so let’s first define that term. Machine learning is a type of AI that gives computers the ability to learn without being explicitly programmed. Machine learning algorithms can tell you something interesting about a set of data without your writing custom code specific to the problem. Instead, you feed data to generic algorithms, and the algorithms build their own logic from the patterns in the data.

In deep learning, you create and train a neural network in a specific way. A neural network is a set of algorithms designed to recognize patterns, and in deep learning the network has multiple layers. At the first layer, the network trains on a specific set of features and then sends that information to the next layer. That layer combines the information with other features and passes it to the next layer, and so on.

Deep learning has grown in popularity because it has proven to outperform other machine learning methodologies. With the advancement of distributed compute resources, and with businesses generating an influx of image, text, and voice data, deep learning can deliver insights that weren’t previously possible.
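The layer-by-layer flow described above can be sketched in a few lines of Python. The network size, weights, inputs, and ReLU activation here are illustrative assumptions to show how each layer combines the previous layer's outputs; Einstein Vision's real networks are far larger and are trained for you behind the API.

```python
def relu(x):
    """Rectified linear unit: a common activation applied between layers."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: each output combines all of the inputs."""
    return [
        relu(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def forward(inputs, layers):
    """Pass the input through each layer in turn, as described above."""
    activations = inputs
    for weights, biases in layers:
        activations = layer(activations, weights, biases)
    return activations

# Two tiny layers: 3 input features -> 2 hidden units -> 1 output score.
network = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),
    ([[1.0, -1.0]], [0.0]),
]
score = forward([1.0, 2.0, 3.0], network)
```

The point is structural: each layer's output becomes the next layer's input, so later layers work with increasingly combined features rather than the raw pixels.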
{"__v":2,"_id":"57db17575641201900b35ba9","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"We’re now in the world of AI and deep learning, and this space has lots of new terms to become familiar with. Understanding these terms and how they relate to each other makes it easier to work with Einstein Vision.\n\n- **Dataset**—The training data, which consists of inputs and outputs. Training the dataset creates the model used to make predictions. For an image recognition problem, the image examples you provide train the model on the desired output labels that you want the model to predict. For example, in the Create a Custom Classifier scenario, we create a model named Beach and Mountain Model from a binary training dataset consisting of two labels: Beaches (images of beach scenes) and Mountains (images of mountain scenes). A multi-label dataset contains three or more labels.\n\n- **Label**—A group of similar data inputs in a dataset that your model is trained to recognize. A label references the output name you want your model to predict. For example, for our Beach and Mountain model, the training data contains images of beaches and that label is  “Beaches.” Images of mountains have a label of “Mountains.” The food classifier, which is trained from a multi-label dataset, contains labels like chocolate cake, pasta, macaroons, and so on.\n\n- **Model**—A machine learning construct used to solve a classification problem. Developers design a classification model by creating a dataset and then defining labels and providing positive examples of inputs that belong to these labels. When you train the dataset, the system then determines the commonalities and differences between the various labels to generalize the characteristics that define each label. 
The model predicts which class a new input falls into based on the predefined classes specified in your training dataset.\n \n- **Training**—The process through which a model is created and learns the classification rules based on a given set of training inputs (dataset).\n\n- **Prediction**—The results that the model returns as to how closely the input matches data in the dataset.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/e8fe5c5-metamind_ds_updates_df_docs.png\",\n        \"metamind_ds_updates_df_docs.png\",\n        1000,\n        500,\n        \"#14abdb\"\n      ]\n    }\n  ]\n}\n[/block]","category":"57db122d5056c819009fffb8","createdAt":"2016-09-15T21:49:11.143Z","excerpt":"","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":2,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"predictive-vision-service-terminology","sync_unique":"","title":"Einstein Vision Terminology","type":"basic","updates":["580664694ea93f3700b5f1ab"],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

# Einstein Vision Terminology


We’re now in the world of AI and deep learning, and this space has lots of new terms to become familiar with. Understanding these terms and how they relate to each other makes it easier to work with Einstein Vision.

- **Dataset**—The training data, which consists of inputs and outputs. Training the dataset creates the model used to make predictions. For an image-recognition problem, the example images you provide train the model on the output labels that you want the model to predict. For example, in the Create a Custom Classifier scenario, we create a model named Beach and Mountain Model from a binary training dataset consisting of two labels: Beaches (images of beach scenes) and Mountains (images of mountain scenes). A multi-label dataset contains three or more labels.

- **Label**—A group of similar data inputs in a dataset that your model is trained to recognize. A label references the output name you want your model to predict. For example, for our Beach and Mountain Model, the training data contains images of beaches, and that label is “Beaches.” Images of mountains have a label of “Mountains.” The food classifier, which is trained from a multi-label dataset, contains labels like chocolate cake, pasta, macaroons, and so on.

- **Model**—A machine learning construct used to solve a classification problem. Developers design a classification model by creating a dataset, defining labels, and providing positive examples of inputs that belong to those labels. When you train the dataset, the system determines the commonalities and differences between the labels to generalize the characteristics that define each label. The model then predicts which class a new input falls into, based on the classes defined in your training dataset.

- **Training**—The process through which a model is created and learns the classification rules from a given set of training inputs (the dataset).

- **Prediction**—The results that the model returns, indicating how closely the input matches data in the dataset.

![metamind_ds_updates_df_docs.png](https://files.readme.io/e8fe5c5-metamind_ds_updates_df_docs.png)
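To see how labels and predictions fit together in practice, here is a short Python sketch of reading a prediction result. The response shape below, an array of label and probability pairs for the Beach and Mountain Model, is a hypothetical example for illustration, not a captured API response.

```python
import json

# Hypothetical prediction result for the Beach and Mountain Model:
# one probability per label the model was trained on.
response_body = json.dumps({
    "probabilities": [
        {"label": "Beaches", "probability": 0.87},
        {"label": "Mountains", "probability": 0.13},
    ]
})

def top_prediction(body):
    """Return the label the model considers the closest match, with its score."""
    probabilities = json.loads(body)["probabilities"]
    best = max(probabilities, key=lambda p: p["probability"])
    return best["label"], best["probability"]

label, probability = top_prediction(response_body)
```

A binary dataset like this one always yields two probabilities; a multi-label dataset would yield one entry per label.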
{"__v":1,"_id":"57fd16faeaa77f19008b8221","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"- [Get an account](https://metamind.readme.io/docs/what-you-need-to-call-api#section-get-an-einstein-platform-account)\n- [Generate a token](https://metamind.readme.io/docs/what-you-need-to-call-api#section-generate-a-token)\n\n##Get an Einstein Platform Account##\n\n1. From a browser, navigate to the [sign up page](https://api.metamind.io/signup).\n\n2. Click **Sign Up Using Salesforce**.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/b18c17e-sign_up.png\",\n        \"sign_up.png\",\n        450,\n        639,\n        \"#7b6892\"\n      ]\n    }\n  ]\n}\n[/block]\n3. On the Salesforce login page, type your username and password, and click **Log In**.  If you’re already logged in to Salesforce, you won’t see this page and you can skip to Step 4.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/037038d-log_in.png\",\n        \"log_in.png\",\n        439,\n        602,\n        \"#0d84d3\"\n      ]\n    }\n  ]\n}\n[/block]\n4. Click **Allow** so the page can access basic information, such as your email address, and perform requests.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/e6ca8ef-allow_access.png\",\n        \"allow_access.png\",\n        428,\n        485,\n        \"#f3f2f9\"\n      ]\n    }\n  ]\n}\n[/block]\n5. On the activation page, click **Download Key** to save the key locally. The key file is named `einstein_platform.pem`. 
Make a note of where you save this file because you'll need it to authenticate when you call the API.\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Caution\",\n  \"body\": \"The **Download Key** button is only supported in the most recent version of these browsers: Google Chrome <sup>TM</sup>, Mozilla<sup>®</sup> Firefox<sup>®</sup>, and Apple<sup>®</sup> Safari<sup>®</sup>. If you're using a different browser, you can cut and paste your key into a text file and save it as `einstein_platform.pem`.\"\n}\n[/block]\n##Generate a Token##\n\nEach API call must contain a valid OAuth token in the request header. To generate a token, you create a JWT payload, sign the payload with your private key, and then call the API to get the token. \n\nTo get a token without code, see [Set Up Authorization](doc:set-up-auth). If you're generating a token in code, the sequence of steps is the same, but the details will vary depending on the programming language.","category":"588a717b9864881b00189f6c","createdAt":"2016-10-11T16:44:42.625Z","excerpt":"Before you can access the Einstein Vision API, you first create an account and download your key. Then you use your key to generate an OAuth token.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":0,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"what-you-need-to-call-api","sync_unique":"","title":"What You Need to Call the API","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

# What You Need to Call the API

Before you can access the Einstein Vision API, you first create an account and download your key. Then you use your key to generate an OAuth token.

- [Get an account](https://metamind.readme.io/docs/what-you-need-to-call-api#section-get-an-einstein-platform-account)
- [Generate a token](https://metamind.readme.io/docs/what-you-need-to-call-api#section-generate-a-token)

## Get an Einstein Platform Account

1. From a browser, navigate to the [sign-up page](https://api.metamind.io/signup).

2. Click **Sign Up Using Salesforce**.

   ![sign_up.png](https://files.readme.io/b18c17e-sign_up.png)

3. On the Salesforce login page, type your username and password, and click **Log In**. If you’re already logged in to Salesforce, you won’t see this page, and you can skip to Step 4.

   ![log_in.png](https://files.readme.io/037038d-log_in.png)

4. Click **Allow** so the page can access basic information, such as your email address, and perform requests.

   ![allow_access.png](https://files.readme.io/e6ca8ef-allow_access.png)

5. On the activation page, click **Download Key** to save the key locally. The key file is named `einstein_platform.pem`. Make a note of where you save this file because you'll need it to authenticate when you call the API.

> **Caution:** The **Download Key** button is supported only in the most recent versions of these browsers: Google Chrome™, Mozilla® Firefox®, and Apple® Safari®. If you're using a different browser, you can cut and paste your key into a text file and save it as `einstein_platform.pem`.

## Generate a Token

Each API call must contain a valid OAuth token in the request header. To generate a token, you create a JWT payload, sign the payload with your private key, and then call the API to get the token.

To get a token without code, see [Set Up Authorization](doc:set-up-auth). If you're generating a token in code, the sequence of steps is the same, but the details vary depending on the programming language.
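As a rough illustration of the JWT step, the Python sketch below assembles the header and claims that get signed. The account email and token endpoint are placeholder values, and the RS256 signature itself requires a crypto library (for example, PyJWT) together with your `einstein_platform.pem` private key, so it is noted here but not performed.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def jwt_signing_input(account_email: str, token_endpoint: str,
                      valid_for_seconds: int = 3600) -> str:
    """Build the 'header.payload' string. Signing this string with the
    einstein_platform.pem private key (RS256) produces the third JWT segment."""
    header = {"alg": "RS256", "typ": "JWT"}
    claims = {
        "sub": account_email,                        # the account making the call
        "aud": token_endpoint,                       # who the assertion is for
        "exp": int(time.time()) + valid_for_seconds  # expiration time
    }
    return b64url(json.dumps(header).encode()) + "." + \
           b64url(json.dumps(claims).encode())

# Placeholder values -- substitute your own account email and the token
# endpoint from the documentation.
signing_input = jwt_signing_input("you@example.com",
                                  "https://api.metamind.io/v1/oauth2/token")
header_b64 = signing_input.split(".")[0]
```

You would then POST the signed assertion to the token endpoint and read the OAuth token from the response; the exact request format is covered in [Set Up Authorization](doc:set-up-auth).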
{"__v":1,"_id":"57ed8c81da9c632b008e66f4","api":{"settings":"","results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"auth":"required","params":[],"url":""},"body":"To help you get up and running quickly, you’ll step through integrating your Salesforce org with the Einstein Vision API. First, you create Apex classes that call the API. Then you create a Visualforce page to tie it all together.\n\nIf you need help as you go through these steps, check out the [Einstein Platform developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.","category":"57eecc61095dda17004b3bb7","createdAt":"2016-09-29T21:49:53.249Z","excerpt":"","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":0,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"apex_qs_scenario","sync_unique":"","title":"Scenario","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

# Scenario


To help you get up and running quickly, you’ll step through integrating your Salesforce org with the Einstein Vision API. First, you create Apex classes that call the API. Then you create a Visualforce page to tie it all together.

If you need help as you go through these steps, check out the [Einstein Platform developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.
{"__v":1,"_id":"57ed8d88da9c632b008e66fd","api":{"settings":"","results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"auth":"required","params":[],"url":""},"body":"- **Set up your account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform account.\n\n- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded (previously named `predictive_services.pem`) as part of that process. This file contains your private key.\n\n- **Install Git**—To get the Visualforce and Apex code, you need Git to clone the repos.","category":"57eecc61095dda17004b3bb7","createdAt":"2016-09-29T21:54:16.224Z","excerpt":"","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":1,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"apex-qs-prereqs","sync_unique":"","title":"Prerequisites","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Prerequisites


- **Set up your account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform account.

- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded (previously named `predictive_services.pem`) as part of that process. This file contains your private key.

- **Install Git**—To get the Visualforce and Apex code, you need Git to clone the repos.
{"__v":0,"_id":"58815763d3a4e40f00438d1d","api":{"settings":"","results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"auth":"required","params":[],"url":""},"body":"1. Log in to Salesforce.\n\n2. Click **Files**. \n\n3. Click **Upload File**. \n\n4. Navigate to the directory where you saved the `einstein_platform.pem` file, select the file, and click **Open**. You should see the key file in the list of files owned by you.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/4588eea-files_key.png\",\n        \"files_key.png\",\n        937,\n        242,\n        \"#f2f9fa\"\n      ]\n    }\n  ]\n}\n[/block]\n\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Tip\",\n  \"body\": \"The key file was previously named `predictive_services.pem`. If you signed up at an earlier time and you can't find your key file, try searching for a file by this name.\"\n}\n[/block]","category":"57eecc61095dda17004b3bb7","createdAt":"2017-01-20T00:18:43.721Z","excerpt":"You must upload your key to Salesforce Files so that the Apex controller class can access it.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"pages":[],"description":""},"order":2,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"upload-your-key","sync_unique":"","title":"Upload Your Key","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Upload Your Key

You must upload your key to Salesforce Files so that the Apex controller class can access it.

1. Log in to Salesforce.

2. Click **Files**.

3. Click **Upload File**.

4. Navigate to the directory where you saved the `einstein_platform.pem` file, select the file, and click **Open**. You should see the key file in the list of files owned by you.

![files_key.png](https://files.readme.io/4588eea-files_key.png)

> **Tip:** The key file was previously named `predictive_services.pem`. If you signed up at an earlier time and you can't find your key file, try searching for a file by this name.
{"__v":2,"_id":"57ed9909f0f1912400bec10f","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"1. Clone the JWT repo by using this command.\n```git clone https://github.com/salesforceidentity/jwt```\n\n2. Clone the Apex code repo by using this command.\n```git clone https://github.com/MetaMind/apex-utils```","category":"57eecc61095dda17004b3bb7","createdAt":"2016-09-29T22:43:21.085Z","excerpt":"Now that you’ve uploaded your key, get the code from GitHub.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":4,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"apex-qs-get-the-code","sync_unique":"","title":"Get the Code","type":"basic","updates":["58a5d20e79ac232f00cbaf72"],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Get the Code

Now that you’ve uploaded your key, get the code from GitHub.

1. Clone the JWT repo by using this command.

   ```
   git clone https://github.com/salesforceidentity/jwt
   ```

2. Clone the Apex code repo by using this command.

   ```
   git clone https://github.com/MetaMind/apex-utils
   ```
{"__v":1,"_id":"57ed9a79d707a824005fa45a","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"1. Log in to Salesforce.\n\n2. From Setup, enter `Remote Site` in the `Quick Find` box, then select **Remote Site Settings**. \n\n3. Click **New Remote Site**. \n\n4. Enter a name for the remote site.\n\n5. In the Remote Site URL field, enter `https://api.metamind.io`. \n\n6. Click **Save**.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/d316e9b-remote_site.png\",\n        \"remote_site.png\",\n        329,\n        139,\n        \"#e4e7d8\"\n      ]\n    }\n  ]\n}\n[/block]","category":"57eecc61095dda17004b3bb7","createdAt":"2016-09-29T22:49:29.135Z","excerpt":"Before you can call the Einstein Vision API from Apex, you must add the API endpoint as a remote site.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":5,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"apex-qs-create-remote-site","sync_unique":"","title":"Create a Remote Site","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Create a Remote Site

Before you can call the Einstein Vision API from Apex, you must add the API endpoint as a remote site.

1. Log in to Salesforce.

2. From Setup, enter `Remote Site` in the Quick Find box, then select **Remote Site Settings**.

3. Click **New Remote Site**.

4. Enter a name for the remote site.

5. In the Remote Site URL field, enter `https://api.metamind.io`.

6. Click **Save**.

![remote_site.png](https://files.readme.io/d316e9b-remote_site.png)
{"__v":1,"_id":"57eececbb79f200e00c354f2","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"1. In Salesforce, from Setup, enter `Apex Classes` in the Quick Find box, then select **Apex Classes**. \n \n2. Click **New**.\n\n3. To create the `JWT` Apex class, copy all the code from `JWT.apex` into the Apex Class tab and click **Save**.\n\n4. To create the `JWTBearerFlow` Apex class, go back to the Apex Classes page, and click **New**.\n\n5. Copy all the code from `JWTBearer.apex` to the Apex Class tab and click **Save**.\n\n6. To create the `HttpFormBuilder` Apex class, go back to the Apex Classes page, and click **New**.\n\n7. Copy all the code from `HttpFormBuilder.apex` into the Apex Class tab and click **Save**.\n\n8. To create the `Vision` Apex class, go back to the Apex Classes page, and click **New**.\n\n9. Copy all the code from `Vision.apex` into the Apex Class tab and click **Save**.\n\n10. To create the `VisionController` Apex class, go back to the Apex Classes page, and click **New**.\n\n11. Copy the VisionController code from the apex-utils `README.md` into the Apex Class tab. This class is all the code from `public class VisionController {` to the closing brace `}`. In this example, the expiration is one hour (3600 seconds).\n\n12. Update the `jwt.sub` placeholder text of `yourname@example.com` with your email address. Use the email address associated with your user in the Salesforce org you logged in to when you created an account. \n13. Click **Save**.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \" // Get a new token\\n JWT jwt = new JWT('RS256');\\n // jwt.cert = 'JWTCert'; // Uncomment this if you used a Salesforce certificate to sign up for an Einstein Platform account\\n jwt.pkcs8 = keyContents; // Comment this if you are using jwt.cert\\n jwt.iss = 'developer.force.com';\\n jwt.sub = 'yourname@example.com';\\n jwt.aud = 'https://api.metamind.io/v1/oauth2/token';\\n jwt.exp = '3600';\",\n      \"language\": \"text\"\n    }\n  ]\n}\n[/block]","category":"57eecc61095dda17004b3bb7","createdAt":"2016-09-30T20:44:59.590Z","excerpt":"In this step, you create the Apex classes that call the API and do all of the heavy lifting.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":6,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"apex-qs-create-classes","sync_unique":"","title":"Create the Apex Classes","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Create the Apex Classes

In this step, you create the Apex classes that call the API and do all of the heavy lifting.

1. In Salesforce, from Setup, enter `Apex Classes` in the Quick Find box, then select **Apex Classes**.

2. Click **New**.

3. To create the `JWT` Apex class, copy all the code from `JWT.apex` into the Apex Class tab and click **Save**.

4. To create the `JWTBearerFlow` Apex class, go back to the Apex Classes page, and click **New**.

5. Copy all the code from `JWTBearer.apex` to the Apex Class tab and click **Save**.

6. To create the `HttpFormBuilder` Apex class, go back to the Apex Classes page, and click **New**.

7. Copy all the code from `HttpFormBuilder.apex` into the Apex Class tab and click **Save**.

8. To create the `Vision` Apex class, go back to the Apex Classes page, and click **New**.

9. Copy all the code from `Vision.apex` into the Apex Class tab and click **Save**.

10. To create the `VisionController` Apex class, go back to the Apex Classes page, and click **New**.

11. Copy the VisionController code from the apex-utils `README.md` into the Apex Class tab. This class is all the code from `public class VisionController {` to the closing brace `}`. In this example, the expiration is one hour (3600 seconds).

12. Update the `jwt.sub` placeholder text of `yourname@example.com` with your email address. Use the email address associated with your user in the Salesforce org you logged in to when you created an account.

13. Click **Save**.

```
 // Get a new token
 JWT jwt = new JWT('RS256');
 // jwt.cert = 'JWTCert'; // Uncomment this if you used a Salesforce certificate to sign up for an Einstein Platform account
 jwt.pkcs8 = keyContents; // Comment this if you are using jwt.cert
 jwt.iss = 'developer.force.com';
 jwt.sub = 'yourname@example.com';
 jwt.aud = 'https://api.metamind.io/v1/oauth2/token';
 jwt.exp = '3600';
```
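Of these classes, `HttpFormBuilder` does the least glamorous work: Apex's HTTP classes don't assemble `multipart/form-data` request bodies for you, so it builds the body by hand, part by part. If it helps to see the idea outside Apex, here is a minimal Python sketch of the same technique; the `modelId` and `sampleLocation` field names in the usage example are illustrative, not confirmed API parameters.

```python
import uuid

def build_multipart_form(fields):
    """Assemble a multipart/form-data body by hand, as HttpFormBuilder does in Apex.

    Returns (body_bytes, content_type_header).
    """
    # A random boundary string that won't appear in the field values.
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        # Each field is its own part, delimited by a boundary line.
        parts.append(
            "--" + boundary + "\r\n"
            'Content-Disposition: form-data; name="' + name + '"\r\n'
            "\r\n" + str(value) + "\r\n"
        )
    # The final boundary is suffixed with "--" to close the body.
    body = "".join(parts) + "--" + boundary + "--\r\n"
    return body.encode("utf-8"), "multipart/form-data; boundary=" + boundary

# Illustrative usage: fields a prediction call might carry.
body, content_type = build_multipart_form({
    "modelId": "GeneralImageClassifier",
    "sampleLocation": "http://example.com/test.jpg",
})
```

In a real request you would send this body in a POST with the returned `Content-Type` header and your bearer token in the `Authorization` header.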
{"__v":1,"_id":"57eed0ae7a53690e000abc2b","api":{"settings":"","results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"auth":"required","params":[],"url":""},"body":"1. In Salesforce, from Setup, enter `Visualforce` in the Quick Find box, then select **Visualforce Pages**. \n \n2. Click **New**.\n\n3. Enter a label and name of Predict.\n\n4. From the `README.md` file, copy all of the code from `<apex:page Controller=\"VisionController\">` to `</apex:page>` and paste it into the code editor.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/fbe5801-vf_page.png\",\n        \"vf_page.png\",\n        969,\n        774,\n        \"#f5f5f2\"\n      ]\n    }\n  ]\n}\n[/block]\n5. Click **Save**.\n\n6. Click **Preview** to test out the page.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/f6dab44-prediction.png\",\n        \"prediction.png\",\n        396,\n        333,\n        \"#f6f7f5\"\n      ]\n    }\n  ]\n}\n[/block]\nYour page shows the prediction results from the General Image Classifier, and the classifier is pretty sure it’s a picture of a tree frog.\n\nCongratulations! You wrote code to call the Einstein Vision API to make a prediction with an image, and all from within your Salesforce org.","category":"57eecc61095dda17004b3bb7","createdAt":"2016-09-30T20:53:02.193Z","excerpt":"Now you create a Visualforce page that calls the classes that you just created to make a prediction.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":7,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"apex-qs-create-vf-page","sync_unique":"","title":"Create the Visualforce Page","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Create the Visualforce Page

Now you create a Visualforce page that calls the classes that you just created to make a prediction.

1. In Salesforce, from Setup, enter `Visualforce` in the Quick Find box, then select **Visualforce Pages**.

2. Click **New**.

3. Enter `Predict` for both the label and the name.

4. From the `README.md` file, copy all of the code from `<apex:page Controller="VisionController">` to `</apex:page>` and paste it into the code editor.

![vf_page.png](https://files.readme.io/fbe5801-vf_page.png)

5. Click **Save**.

6. Click **Preview** to test out the page.

![prediction.png](https://files.readme.io/f6dab44-prediction.png)

Your page shows the prediction results from the General Image Classifier, and the classifier is pretty sure it’s a picture of a tree frog.

Congratulations! You wrote code to call the Einstein Vision API to make a prediction with an image, and all from within your Salesforce org.
{"__v":1,"_id":"57dee7f884019d2000e95aea","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"After you've mastered the basics, it's time to step through creating your own image classifier and testing it out. You use the Einstein Vision REST API for all these tasks.\n\nIf you need help as you go through these steps, check out the [Einstein Platform developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.\n\nHere's the scenario: you’re a developer who works for a company that sells outdoor sporting gear. The company has automation that monitors social media channels. When someone posts a photo, the company wants to know whether the photo was taken at the beach or in the mountains. Based on where the photo was taken, the company can make targeted product recommendations to its customers.\n \nTo perform that kind of analysis manually requires multiple people. In addition, manual analysis is slow, so it’s likely that the company couldn’t respond until well after the photo was posted. You’ve been tasked with implementing automation that can solve this problem.\n \nYour task is straightforward: create a model that can identify whether an image is of the beach or the mountains. Then test the model with an image of a beach scene.","category":"57dc74f4ea7c0d1700f1d4d4","createdAt":"2016-09-18T19:16:08.208Z","excerpt":"","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":0,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"scenario","sync_unique":"","title":"Scenario","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Scenario


After you've mastered the basics, it's time to step through creating your own image classifier and testing it out. You use the Einstein Vision REST API for all these tasks.

If you need help as you go through these steps, check out the [Einstein Platform developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.

Here's the scenario: you’re a developer who works for a company that sells outdoor sporting gear. The company has automation that monitors social media channels. When someone posts a photo, the company wants to know whether the photo was taken at the beach or in the mountains. Based on where the photo was taken, the company can make targeted product recommendations to its customers.

To perform that kind of analysis manually requires multiple people. In addition, manual analysis is slow, so it’s likely that the company couldn’t respond until well after the photo was posted. You’ve been tasked with implementing automation that can solve this problem.

Your task is straightforward: create a model that can identify whether an image is of the beach or the mountains. Then test the model with an image of a beach scene.
{"__v":1,"_id":"57dee8020b50fc0e00554d0d","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"- **Sign up for an account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform account.\n\n- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded (previously named `predictive_services.pem`)  as part of that process. This file contains your private key.\n\n- **Install cURL**—We’ll be using the cURL command line tool throughout the following steps. This tool is installed by default on Linux and OSX. If you don’t already have it installed, download it from [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html)","category":"57dc74f4ea7c0d1700f1d4d4","createdAt":"2016-09-18T19:16:18.583Z","excerpt":"","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":1,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"prerequisites","sync_unique":"","title":"Prerequisites","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Prerequisites


- **Sign up for an account**—Follow the steps in [What You Need to Call the API](doc:what-you-need-to-call-api) to set up your Einstein Platform account.

- **Find your key file**—If you've already created an account, locate the `einstein_platform.pem` file that you downloaded (previously named `predictive_services.pem`) as part of that process. This file contains your private key.

- **Install cURL**—We use the cURL command-line tool throughout the following steps. This tool is installed by default on Linux and macOS. If you don’t already have it installed, download it from [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html).
{"__v":1,"_id":"57dee81ca31fca170074f2f9","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"1. Type your email address or account ID. \n - If you signed up using Salesforce, use the email address associated with your user in the Salesforce org you logged in to when you signed up. \n - If you signed up using Heroku, use the account ID contained in the `EINSTEIN_VISION_ACCOUNT_ID` config variable.\n\n\n2.  Click **Browse** and navigate to the `einstein_platform.pem` file that you downloaded when you signed up for an account. \n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Tip\",\n  \"body\": \"The key file was previously named `predictive_services.pem`. If you signed up at an earlier time and you can't find your key file, try searching for a file by this name.\"\n}\n[/block]\n3.  Set the number of minutes after which the token expires.\n\n4.  Click **Get Token**. You can now copy and paste the JWT token from the page.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/f296c5e-token_page_with_token.png\",\n        \"token_page_with_token.png\",\n        436,\n        830,\n        \"#0f86b7\"\n      ]\n    }\n  ]\n}\n[/block]\nThis page gives you a quick way to generate a token. In your app, you'll need to add the code that creates an assertion and then calls the API to generate a token. See [Generate an OAuth Token](doc:generate-an-oauth-token).\n[block:callout]\n{\n  \"type\": \"info\",\n  \"title\": \"Tip\",\n  \"body\": \"The token you create when you use this site doesn't automatically refresh. Your application must refresh the token based on the expiration time that you set when you create it.\"\n}\n[/block]","category":"57dc74f4ea7c0d1700f1d4d4","createdAt":"2016-09-18T19:16:44.212Z","excerpt":"The Einstein Vision API uses OAuth 2.0 JWT bearer token flow for authorization. Use the [token page](https://api.metamind.io/token) to upload your key file and generate a JWT token.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":2,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"set-up-auth","sync_unique":"","title":"Set Up Authorization","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Set Up Authorization

The Einstein Vision API uses OAuth 2.0 JWT bearer token flow for authorization. Use the [token page](https://api.metamind.io/token) to upload your key file and generate a JWT token.

1. Type your email address or account ID.
   - If you signed up using Salesforce, use the email address associated with your user in the Salesforce org you logged in to when you signed up.
   - If you signed up using Heroku, use the account ID contained in the `EINSTEIN_VISION_ACCOUNT_ID` config variable.

2. Click **Browse** and navigate to the `einstein_platform.pem` file that you downloaded when you signed up for an account.

   > **Tip:** The key file was previously named `predictive_services.pem`. If you signed up at an earlier time and you can't find your key file, try searching for a file by this name.

3. Set the number of minutes after which the token expires.

4. Click **Get Token**. You can now copy and paste the JWT token from the page.

![token_page_with_token.png](https://files.readme.io/f296c5e-token_page_with_token.png)

This page gives you a quick way to generate a token. In your app, you'll need to add the code that creates an assertion and then calls the API to generate a token. See [Generate an OAuth Token](doc:generate-an-oauth-token).

> **Tip:** The token you create when you use this site doesn't automatically refresh. Your application must refresh the token based on the expiration time that you set when you create it.
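Since your app must eventually create the assertion itself, it helps to see what that involves. A JWT bearer assertion is a base64url-encoded header and claims set (`iss`, `sub`, `aud`, `exp`, matching the values used in this guide), joined by a dot and then signed with RS256. The sketch below, in Python, builds only the unsigned `header.payload` string; the signature over it, using the private key in `einstein_platform.pem`, requires a crypto library such as PyJWT and is omitted here.

```python
import base64
import json
import time

def b64url(data):
    # Base64url encoding without padding, as the JWT spec (RFC 7515) requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def build_signing_input(email, expiry_seconds=3600):
    """Build the header.payload signing input for a JWT bearer assertion.

    The claims mirror the ones used in this guide. exp is an absolute
    epoch timestamp, here set expiry_seconds into the future.
    """
    header = {"alg": "RS256", "typ": "JWT"}
    claims = {
        "iss": "developer.force.com",
        "sub": email,
        "aud": "https://api.metamind.io/v1/oauth2/token",
        "exp": int(time.time()) + expiry_seconds,
    }
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())

signing_input = build_signing_input("yourname@example.com")
```

A real client signs `signing_input` with the RSA private key, appends the base64url-encoded signature as a third dot-separated segment, and POSTs the result as the `assertion` parameter of the token request.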

Step 1: Create the Dataset

The first step is to create the dataset that contains the beach and mountain images. You use this dataset to create the model.

In the following command, replace `<TOKEN>` with your JWT token and run the command. This command:

- Creates a dataset called `mountainvsbeach` from the specified .zip file
- Creates two labels from the .zip file directories: a `Beaches` label and a `Mountains` label
- Creates 49 examples named for the images in the Beaches directory and gives them the `Beaches` label
- Creates 50 examples named for the images in the Mountains directory and gives them the `Mountains` label

<sub>If you use the Service, Salesforce may make available certain images to you ("Provided Images"), which are licensed from a third party, as part of the Service. You agree that you will only use the Provided Images in connection with the Service, and you agree that you will not: modify, alter, create derivative works from, sell, sublicense, transfer, assign, or otherwise distribute the Provided Images to any third party.</sub>

```bash
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path=http://metamind.io/images/mountainvsbeach.zip" https://api.metamind.io/v1/vision/datasets/upload/sync
```

This call is synchronous, so you see a response after all the images have finished uploading. The response contains the dataset ID and name as well as information about the labels and examples.

```json
{
  "id": 1000044,
  "name": "mountainvsbeach",
  "createdAt": "2017-02-21T21:59:29.000+0000",
  "updatedAt": "2017-02-21T21:59:29.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 1865,
        "datasetId": 1000044,
        "name": "Mountains",
        "numExamples": 50
      },
      {
        "id": 1866,
        "datasetId": 1000044,
        "name": "Beaches",
        "numExamples": 49
      }
    ]
  },
  "totalExamples": 99,
  "totalLabels": 2,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "image",
  "object": "dataset"
}
```

## Tell Me More

There are other ways to work with datasets using the API. For example, use this command to return a list of all your datasets.

```bash
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets
```

The results look something like this.

```json
{
  "object": "list",
  "data": [
    {
      "id": 1000044,
      "name": "mountainvsbeach",
      "createdAt": "2017-02-21T21:59:29.000+0000",
      "updatedAt": "2017-02-21T21:59:29.000+0000",
      "labelSummary": {
        "labels": [
          { "id": 1865, "datasetId": 1000044, "name": "Mountains", "numExamples": 50 },
          { "id": 1866, "datasetId": 1000044, "name": "Beaches", "numExamples": 49 }
        ]
      },
      "totalExamples": 99,
      "totalLabels": 2,
      "available": true,
      "statusMsg": "SUCCEEDED",
      "type": "image",
      "object": "dataset"
    },
    {
      "id": 1000045,
      "name": "Brain Scans",
      "createdAt": "2017-02-21T22:04:06.000+0000",
      "updatedAt": "2017-02-21T22:04:06.000+0000",
      "labelSummary": {
        "labels": []
      },
      "totalExamples": 0,
      "totalLabels": 0,
      "available": true,
      "statusMsg": "SUCCEEDED",
      "type": "image",
      "object": "dataset"
    }
  ]
}
```

To delete a dataset, use the DELETE verb and pass in the dataset ID.

```bash
curl -X DELETE -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/<DATASET_ID>
```

Deleting a dataset returns an HTTP status of 204; no JSON response is returned.

In this scenario, the API call to create the dataset and upload the image data is synchronous. You can also make an asynchronous call to create a dataset. See [Ways to Create a Dataset](doc:ways-to-create-a-dataset) for more information about when to use the various APIs.
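After the upload returns, a client typically checks the response before training: that the dataset is available and that each label has examples. A minimal sketch in Python — the response shape is taken from the sample above; the helper name `summarize_dataset` is ours:

```python
import json

def summarize_dataset(response_json: str) -> dict:
    """Extract the fields a client usually checks before training a dataset."""
    ds = json.loads(response_json)
    labels = ds["labelSummary"]["labels"]
    return {
        "id": ds["id"],
        # The dataset is ready to train once it's available and the upload succeeded.
        "ready": ds["available"] and ds["statusMsg"] == "SUCCEEDED",
        "examples_per_label": {lb["name"]: lb["numExamples"] for lb in labels},
        "total": ds["totalExamples"],
    }
```

Running this over the sample response above would report the dataset ready, with 50 `Mountains` and 49 `Beaches` examples.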

Step 2: Train the Dataset

Training the dataset creates the model that delivers the predictions.

1. Now that you’ve added the labeled images to the dataset, it’s time to train the dataset. In this command, replace `<TOKEN>` with your token and `<DATASET_ID>` with your dataset ID, and then run it. This command trains the dataset and creates a model with the name specified in the `name` parameter.

```bash
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Beach and Mountain Model" -F "datasetId=<DATASET_ID>" https://api.metamind.io/v1/vision/train
```

The response contains information about the training status and looks like the following. Make a note of the `modelId` because you use this value in the next step.

```json
{
  "datasetId": 1000038,
  "datasetVersionId": 0,
  "name": "Beach and Mountain Model",
  "status": "QUEUED",
  "progress": 0,
  "createdAt": "2017-02-21T21:10:03.000+0000",
  "updatedAt": "2017-02-21T21:10:03.000+0000",
  "learningRate": 0.001,
  "epochs": 3,
  "queuePosition": 1,
  "object": "training",
  "modelId": "X76USM4Q3QRZRODBDTUGDZEHJU",
  "trainParams": null,
  "trainStats": null,
  "modelType": "image"
}
```

2. Training a dataset can take a while, depending on how many images the dataset contains. To get the training status, replace `<TOKEN>` with your token and `<YOUR_MODEL_ID>` with the model ID in this command, and then run it.

```bash
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/train/<YOUR_MODEL_ID>
```

The response returns the status of the training process. If training is in progress, you see a status of `RUNNING`. When training is complete, the response returns a status of `SUCCEEDED` and a progress value of `1`.

```json
{
  "datasetId": 1000072,
  "datasetVersionId": 0,
  "name": "Beach and Mountain Model",
  "status": "SUCCEEDED",
  "progress": 1,
  "createdAt": "2017-02-21T22:08:52.000+0000",
  "updatedAt": "2017-02-21T22:10:20.000+0000",
  "learningRate": 0.001,
  "epochs": 3,
  "object": "training",
  "modelId": "X76USM4Q3QRZRODBDTUGDZEHJU",
  "trainParams": null,
  "trainStats": {
    "labels": 2,
    "examples": 99,
    "totalTime": "00:02:16:958",
    "trainingTime": "00:02:13:664",
    "lastEpochDone": 3,
    "modelSaveTime": "00:00:01:871",
    "testSplitSize": 6,
    "trainSplitSize": 93,
    "datasetLoadTime": "00:00:03:270"
  },
  "modelType": "image"
}
```

## Tell Me More

After you create a model, you can retrieve metrics about it, such as its accuracy, F1 score, and confusion matrix. You can use these values to tune and tweak your model. Use this call to get the model metrics.

```bash
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/models/<MODEL_ID>
```

The command returns a response similar to this one.

```json
{
  "metricsData": {
    "f1": [
      0.9090909090909092,
      0.9411764705882352
    ],
    "labels": [
      "Mountains",
      "Beaches"
    ],
    "testAccuracy": 0.9231,
    "trainingLoss": 0.0286,
    "confusionMatrix": [
      [4, 0],
      [1, 8]
    ],
    "trainingAccuracy": 0.9888
  },
  "createdAt": "2017-02-21T22:19:25.000+0000",
  "id": "X76USM4Q3QRZRODBDTUGDZEHJU",
  "object": "metrics"
}
```

To see the model metrics for each training iteration (epoch) performed to create the model, call the learning curve API. See [Get Model Learning Curve](doc:get-model-learning-curve).
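Because training runs asynchronously, an app usually polls the status endpoint until it reaches a terminal state. A minimal sketch of that loop, with the HTTP call abstracted into a callable so the loop itself doesn't depend on any particular HTTP library (the helper name and timeout values are ours):

```python
import time

def wait_for_training(get_status, poll_seconds=5.0, timeout_seconds=600.0):
    """Poll get_status() until training succeeds or fails, or until the timeout.

    get_status is a callable that performs the authorized GET against
    /v1/vision/train/<MODEL_ID> and returns the parsed JSON as a dict.
    """
    deadline = time.monotonic() + timeout_seconds
    while True:
        status = get_status()
        if status["status"] in ("SUCCEEDED", "FAILED"):  # terminal states
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError("training did not finish within the timeout")
        time.sleep(poll_seconds)  # e.g. QUEUED or RUNNING; wait and retry
```

In practice you'd pass a closure that issues the curl-equivalent GET request with your bearer token and decodes the JSON body.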
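The confusion matrix relates actual labels to predicted labels, so per-label precision, recall, and F1 can be recomputed from it when tuning a model. A minimal sketch — the rows-are-actual, columns-are-predicted convention is our assumption, and because the sample responses above come from separate training runs, the recomputed numbers won't line up exactly with the reported `f1` array:

```python
def per_label_metrics(matrix):
    """Return (precision, recall, f1) per label from a square confusion matrix,
    assuming rows are actual labels and columns are predicted labels."""
    n = len(matrix)
    scores = []
    for i in range(n):
        tp = matrix[i][i]
        fn = sum(matrix[i]) - tp                       # actual i, predicted as something else
        fp = sum(matrix[r][i] for r in range(n)) - tp  # predicted i, actually something else
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append((precision, recall, f1))
    return scores
```

For the matrix `[[4, 0], [1, 8]]` this yields precision 0.8 and recall 1.0 for the first label, and precision 1.0 and recall 8/9 for the second.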
{"__v":1,"_id":"57dee892b269380e0020a0f5","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"You send an image to the model, and the model returns label names and probability values. The probability value is the prediction that the model makes for whether the image matches a label in its dataset. The higher the value, the higher the probability. \n\nYou can classify an image in these ways. \n- Reference the file by a URL\n- Upload the file by its path\n- Upload the image in a base64 string\n\nFor this example, you’ll reference this picture by the file URL.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/4d870d7-546212389.jpg\",\n        \"546212389.jpg\",\n        1024,\n        1024,\n        \"#cac9c5\"\n      ]\n    }\n  ]\n}\n[/block]\n1. In the following command, replace: \n - `<TOKEN>` with your JWT token\n - `<YOUR_MODEL_ID>` with the ID of the model that you created when you trained the dataset\n \nThen run the command from the command line.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://metamind.io/images/546212389.jpg\\\" -F \\\"modelId=<YOUR_MODEL_ID>\\\" https://api.metamind.io/v1/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model returns results similar to the following.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"Beaches\\\",\\n      \\\"probability\\\": 0.97554934\\n    },\\n    {\\n      \\\"label\\\": \\\"Mountains\\\",\\n      \\\"probability\\\": 0.024450686\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  
]\n}\n[/block]\nThe model predicts that the image belongs in the beach label, and therefore, is a picture of a beach scene. The numeric prediction is contained in the `probability` field, and this value is anywhere from 0 (not at all likely) to 1 (very likely). \n\nIn this case, the model is about 98% sure that the image belongs in the beach label. The results are returned in descending order with the greatest probability first.\n\nIf you run a prediction against a model that’s still training, you receive a 403 error.\n\n[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Caution\",\n  \"body\": \"The dataset used for this scenario contains only 99 images, which is considered a small dataset. When you build your own dataset and model, follow the guidance on the [Dataset and Model Best Practices](doc:dataset-and-model-best-practices) page and add a lot of data.\"\n}\n[/block]\n##Tell Me More##\nYou can also classify a local image by uploading the image. Instead of the `sampleLocation` parameter, pass in the `sampleContent` parameter, which contains the image file location of the file to upload.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleContent=@C:\\\\Mountains vs Beach\\\\Beaches\\\\546212389.jpg\\\" -F \\\"modelId=<YOUR_MODEL_ID>\\\" https://api.metamind.io/v1/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nCreating the dataset and model are just the beginning. When you create your own model, be sure to test a range of images to ensure that it’s returning the results that you need.\n\nYou’ve done it! You’ve gone through the complete process of building a dataset, creating a model, and classifying images using the Einstein Vision API. 
You’re ready to take what you’ve learned and bring the power of deep learning to your users.","category":"57dc74f4ea7c0d1700f1d4d4","createdAt":"2016-09-18T19:18:42.562Z","excerpt":"Now that the data is uploaded and you created a model, you’re ready to use it to make predictions.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":6,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"step-3-classify-an-image","sync_unique":"","title":"Step 3: Classify an Image","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Step 3: Classify an Image

Now that the data is uploaded and you've created a model, you're ready to use it to make predictions.

You send an image to the model, and the model returns label names and probability values. The probability value is the prediction that the model makes for whether the image matches a label in its dataset. The higher the value, the higher the probability. You can classify an image in these ways:

- Reference the file by a URL
- Upload the file by its path
- Upload the image in a base64 string

For this example, you'll reference this picture by the file URL.

![546212389.jpg](https://files.readme.io/4d870d7-546212389.jpg)

1. In the following command, replace:
   - `<TOKEN>` with your JWT token
   - `<YOUR_MODEL_ID>` with the ID of the model that you created when you trained the dataset

   Then run the command from the command line.

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://metamind.io/images/546212389.jpg" -F "modelId=<YOUR_MODEL_ID>" https://api.metamind.io/v1/vision/predict
```

The model returns results similar to the following.

```json
{
  "probabilities": [
    {
      "label": "Beaches",
      "probability": 0.97554934
    },
    {
      "label": "Mountains",
      "probability": 0.024450686
    }
  ],
  "object": "predictresponse"
}
```

The model predicts that the image belongs in the beach label, and therefore, is a picture of a beach scene. The numeric prediction is contained in the `probability` field, and this value is anywhere from 0 (not at all likely) to 1 (very likely).

In this case, the model is about 98% sure that the image belongs in the beach label. The results are returned in descending order with the greatest probability first.

If you run a prediction against a model that's still training, you receive a 403 error.
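To use a prediction like this in an app, a client parses the JSON response and reads the top entry. Here's a minimal sketch in Python, assuming the beach response shown above; `top_prediction` is an illustrative helper name, not part of the Einstein Vision API.

```python
import json

# Example predictresponse body, as returned for the beach image above.
response_body = """
{
  "probabilities": [
    {"label": "Beaches", "probability": 0.97554934},
    {"label": "Mountains", "probability": 0.024450686}
  ],
  "object": "predictresponse"
}
"""

def top_prediction(body):
    """Return the (label, probability) pair with the highest probability.

    The API returns probabilities in descending order, but sorting here
    keeps the helper safe if that ever changes.
    """
    result = json.loads(body)
    best = max(result["probabilities"], key=lambda p: p["probability"])
    return best["label"], best["probability"]

label, probability = top_prediction(response_body)
print(label, round(probability, 2))  # Beaches 0.98
```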
> **Caution:** The dataset used for this scenario contains only 99 images, which is considered a small dataset. When you build your own dataset and model, follow the guidance on the [Dataset and Model Best Practices](doc:dataset-and-model-best-practices) page and add a lot of data.

##Tell Me More##
You can also classify a local image by uploading the image. Instead of the `sampleLocation` parameter, pass in the `sampleContent` parameter, which contains the path of the image file to upload.

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleContent=@C:\Mountains vs Beach\Beaches\546212389.jpg" -F "modelId=<YOUR_MODEL_ID>" https://api.metamind.io/v1/vision/predict
```

Creating the dataset and model is just the beginning. When you create your own model, be sure to test a range of images to ensure that it's returning the results that you need.

You've done it! You've gone through the complete process of building a dataset, creating a model, and classifying images using the Einstein Vision API. You're ready to take what you've learned and bring the power of deep learning to your users.
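The predict endpoint accepts the image in three forms: `sampleLocation` (a URL), `sampleContent` (an uploaded file), or `sampleBase64Content` (a base64 string, as listed in the API reference). Here's a sketch, using only the Python standard library, of assembling the right multipart field for each form; `prediction_fields` is a hypothetical helper, the field-name-to-bytes mapping is an assumption, and no HTTP request is made here.

```python
import base64
from pathlib import Path

def prediction_fields(model_id, *, url=None, path=None, image_bytes=None):
    """Build the multipart form fields for /v1/vision/predict.

    Exactly one of url, path, or image_bytes selects how the image is sent:
    sampleLocation (URL), sampleContent (file bytes, the equivalent of
    curl's -F "sampleContent=@file"), or sampleBase64Content (base64 string).
    """
    fields = {"modelId": model_id}
    if url is not None:
        fields["sampleLocation"] = url
    elif path is not None:
        fields["sampleContent"] = Path(path).read_bytes()
    elif image_bytes is not None:
        fields["sampleBase64Content"] = base64.b64encode(image_bytes).decode("ascii")
    else:
        raise ValueError("provide url, path, or image_bytes")
    return fields

fields = prediction_fields("YOUR_MODEL_ID", url="http://metamind.io/images/546212389.jpg")
print(sorted(fields))  # ['modelId', 'sampleLocation']
```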
{"__v":2,"_id":"57eec7f1cc36920e00bff4cd","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"- [Food Image Model](https://metamind.readme.io/docs/use-pre-built-models#section-food-image-model)\n- [General Image Model](https://metamind.readme.io/docs/use-pre-built-models#section-general-image-model)\n- [Scene Image Model](https://metamind.readme.io/docs/use-pre-built-models#section-scene-image-model)\n\n\n##Food Image Model##\nThis model is used to classify different foods and contains over 500 labels. You classify an image against this model just as you would a custom model; but instead of using the `modelId` of the custom model, you specify a `modelId` of `FoodImageClassifier`. For the list of classes this model contains, see [Food Image Model Class List](page:food-image-model-class-list).\n\nThis cURL command makes a prediction against the food model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://metamind.io/images/foodimage.jpg\\\" -F \\\"modelId=FoodImageClassifier\\\" https://api.metamind.io/v1/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model returns a result similar to the following for the pizza image referenced by `http://metamind.io/images/foodimage.jpg`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"pizza\\\",\\n      \\\"probability\\\": 0.4895147383213043\\n    },\\n    {\\n      \\\"label\\\": \\\"flatbread\\\",\\n      \\\"probability\\\": 0.30357491970062256\\n    },\\n    {\\n      \\\"label\\\": \\\"focaccia\\\",\\n      \\\"probability\\\": 0.10683325678110123\\n    },\\n    {\\n      \\\"label\\\": 
\\\"frittata\\\",\\n      \\\"probability\\\": 0.05281512811779976\\n    },\\n    {\\n      \\\"label\\\": \\\"pepperoni\\\",\\n      \\\"probability\\\": 0.029621008783578873\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n##General Image Model##\nThis model is used to classify a variety of images and contains thousands of labels. You can classify an image against this model just as you would a custom model; but instead of using the `modelId` of the custom model, you specify a `modelId` of `GeneralImageClassifier`. For the list of classes this model contains, see [General Image Model Class List](page:general-image-model-class-list).\n\nThis cURL command makes a prediction against the general image model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://metamind.io/images/generalimage.jpg\\\" -F \\\"modelId=GeneralImageClassifier\\\" https://api.metamind.io/v1/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model return a result similar to the following for the tree frog image referenced by `http://metamind.io/images/generalimage.jpg`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"tree frog, tree-frog\\\",\\n      \\\"probability\\\": 0.7963114976882935\\n    },\\n    {\\n      \\\"label\\\": \\\"tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui\\\",\\n      \\\"probability\\\": 0.1978749930858612\\n    },\\n    {\\n      \\\"label\\\": \\\"banded gecko\\\",\\n      \\\"probability\\\": 0.001511271228082478\\n    },\\n    {\\n      \\\"label\\\": \\\"African chameleon, Chamaeleo chamaeleon\\\",\\n      \\\"probability\\\": 0.0013212867779657245\\n    },\\n    {\\n      \\\"label\\\": \\\"bullfrog, Rana 
catesbeiana\\\",\\n      \\\"probability\\\": 0.0011536618694663048\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n##Scene Image Model##\nThis model is used to classify a variety of indoor and outdoor scenes. You can classify an image against this model just as you would a custom model; but instead of using the `modelId` of the custom model, you specify a `modelId` of `SceneClassifier`. For the list of classes this model contains, see [Scene Image Model Class List](page:scene-image-model-class-list).\n\nThis cURL command makes a prediction against the general image model.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <TOKEN>\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=http://metamind.io/images/gym.jpg\\\" -F \\\"modelId=SceneClassifier\\\" https://api.metamind.io/v1/vision/predict\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nThe model return a result similar to the following for the image referenced by `http://metamind.io/images/gym.jpg`.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"probabilities\\\": [\\n    {\\n      \\\"label\\\": \\\"Gym interior\\\",\\n      \\\"probability\\\": 0.996387\\n    },\\n    {\\n      \\\"label\\\": \\\"Airport terminal\\\",\\n      \\\"probability\\\": 0.0025247275\\n    },\\n    {\\n      \\\"label\\\": \\\"Office or Cubicles\\\",\\n      \\\"probability\\\": 0.00049142947\\n    },\\n    {\\n      \\\"label\\\": \\\"Bus or train interior\\\",\\n      \\\"probability\\\": 0.00019321487\\n    },\\n    {\\n      \\\"label\\\": \\\"Restaurant patio\\\",\\n      \\\"probability\\\": 0.000069430374\\n    }\\n  ],\\n  \\\"object\\\": \\\"predictresponse\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  
]\n}\n[/block]","category":"57eec6257a53690e000abc2a","createdAt":"2016-09-30T20:15:45.871Z","excerpt":"Einstein Vision offers pre-built models that you can use as long as you have a valid JWT token. These models are a good way to get started with the API because you can use them to work with and test the API without having to gather data and create your own model.","githubsync":"","hidden":false,"isReference":false,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":0,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"use-pre-built-models","sync_unique":"","title":"Use the Pre-Built Models","type":"basic","updates":["5919a90065320a0f00ef8d3f"],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Use the Pre-Built Models

Einstein Vision offers pre-built models that you can use as long as you have a valid JWT token. These models are a good way to get started with the API because you can use them to work with and test the API without having to gather data and create your own model.

- [Food Image Model](https://metamind.readme.io/docs/use-pre-built-models#section-food-image-model)
- [General Image Model](https://metamind.readme.io/docs/use-pre-built-models#section-general-image-model)
- [Scene Image Model](https://metamind.readme.io/docs/use-pre-built-models#section-scene-image-model)

##Food Image Model##
This model is used to classify different foods and contains over 500 labels. You classify an image against this model just as you would a custom model, but instead of using the `modelId` of the custom model, you specify a `modelId` of `FoodImageClassifier`. For the list of classes this model contains, see [Food Image Model Class List](page:food-image-model-class-list).

This cURL command makes a prediction against the food model.

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://metamind.io/images/foodimage.jpg" -F "modelId=FoodImageClassifier" https://api.metamind.io/v1/vision/predict
```

The model returns a result similar to the following for the pizza image referenced by `http://metamind.io/images/foodimage.jpg`.

```json
{
  "probabilities": [
    {
      "label": "pizza",
      "probability": 0.4895147383213043
    },
    {
      "label": "flatbread",
      "probability": 0.30357491970062256
    },
    {
      "label": "focaccia",
      "probability": 0.10683325678110123
    },
    {
      "label": "frittata",
      "probability": 0.05281512811779976
    },
    {
      "label": "pepperoni",
      "probability": 0.029621008783578873
    }
  ],
  "object": "predictresponse"
}
```

##General Image Model##
This model is used to classify a variety of images and contains thousands of labels.
You can classify an image against this model just as you would a custom model, but instead of using the `modelId` of the custom model, you specify a `modelId` of `GeneralImageClassifier`. For the list of classes this model contains, see [General Image Model Class List](page:general-image-model-class-list).

This cURL command makes a prediction against the general image model.

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://metamind.io/images/generalimage.jpg" -F "modelId=GeneralImageClassifier" https://api.metamind.io/v1/vision/predict
```

The model returns a result similar to the following for the tree frog image referenced by `http://metamind.io/images/generalimage.jpg`.

```json
{
  "probabilities": [
    {
      "label": "tree frog, tree-frog",
      "probability": 0.7963114976882935
    },
    {
      "label": "tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui",
      "probability": 0.1978749930858612
    },
    {
      "label": "banded gecko",
      "probability": 0.001511271228082478
    },
    {
      "label": "African chameleon, Chamaeleo chamaeleon",
      "probability": 0.0013212867779657245
    },
    {
      "label": "bullfrog, Rana catesbeiana",
      "probability": 0.0011536618694663048
    }
  ],
  "object": "predictresponse"
}
```

##Scene Image Model##
This model is used to classify a variety of indoor and outdoor scenes. You can classify an image against this model just as you would a custom model, but instead of using the `modelId` of the custom model, you specify a `modelId` of `SceneClassifier`. For the list of classes this model contains, see [Scene Image Model Class List](page:scene-image-model-class-list).

This cURL command makes a prediction against the scene image model.
```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://metamind.io/images/gym.jpg" -F "modelId=SceneClassifier" https://api.metamind.io/v1/vision/predict
```

The model returns a result similar to the following for the image referenced by `http://metamind.io/images/gym.jpg`.

```json
{
  "probabilities": [
    {
      "label": "Gym interior",
      "probability": 0.996387
    },
    {
      "label": "Airport terminal",
      "probability": 0.0025247275
    },
    {
      "label": "Office or Cubicles",
      "probability": 0.00049142947
    },
    {
      "label": "Bus or train interior",
      "probability": 0.00019321487
    },
    {
      "label": "Restaurant patio",
      "probability": 0.000069430374
    }
  ],
  "object": "predictresponse"
}
```
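Note that the pre-built models have many labels, but the sample responses on this page show only the five highest-probability entries, so the returned probabilities need not sum to 1. Here's a quick Python check using the pizza response from the Food Image Model section above; the values are copied from that JSON, and the five-entry cutoff is an observation from these samples, not a documented limit.

```python
# Probabilities from the pizza predictresponse above (top five of 500+ labels).
pizza_probs = {
    "pizza": 0.4895147383213043,
    "flatbread": 0.30357491970062256,
    "focaccia": 0.10683325678110123,
    "frittata": 0.05281512811779976,
    "pepperoni": 0.029621008783578873,
}

# The highest-probability label is the model's best guess.
top_label = max(pizza_probs, key=pizza_probs.get)

# The five returned probabilities sum to about 0.982; the remaining
# probability mass belongs to labels outside the returned entries.
total = sum(pizza_probs.values())

print(top_label)           # pizza
print(round(total, 3))     # 0.982
```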
{"__v":0,"_id":"584ef98d85373a1b00143e16","api":{"results":{"codes":[{"status":200,"language":"json","code":"{}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","auth":"required","params":[],"url":""},"body":"##General API##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Method\",\n    \"h-1\": \"Call\",\n    \"0-0\": \"[Generate an OAuth Token](doc:generate-an-oauth-token)\",\n    \"0-1\": \"curl -H \\\"Content-type: application/x-www-form-urlencoded\\\" -X POST https://<span></span>api.metamind.io/v1/oauth2/token -d \\\"grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=**{ASSERTION_STRING}**\\\"\",\n    \"1-0\": \"[GET API Usage](doc:get-api-usage)\",\n    \"1-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/apiusage\"\n  },\n  \"cols\": 2,\n  \"rows\": 2\n}\n[/block]\n##Datasets##\n[block:parameters]\n{\n  \"data\": {\n    \"2-0\": \"[Create a Dataset](doc:create-a-dataset)\",\n    \"2-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=**{DATASET_NAME}**\\\" -F \\\"labels=**{LABEL1}**,**{LABEL2}**\\\" https://<span></span>api.metamind.io/v1/vision/datasets\",\n    \"3-0\": \"[Get a Dataset](doc:get-a-dataset)\",\n    \"4-0\": \"[Get All Datasets](doc:get-all-datasets)\",\n    \"3-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/vision/datasets/{DATSET_ID}\",\n    \"4-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/vision/datasets\",\n    \"h-0\": \"Method\",\n    \"h-1\": \"Call\",\n    \"0-0\": \"[Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async)\",\n    \"0-1\": \"curl -X POST -H \\\"Authorization: Bearer 
**{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"data=@**{PATH_TO}.zip**\\\"  https://<span></span>api.metamind.io/v1/vision/datasets/upload\\n\\ncurl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"path=**{URL}.zip**\\\"  https://<span></span>api.metamind.io/v1/vision/datasets/upload\",\n    \"1-0\": \"[Create a Dataset From a Zip File Synchronously](doc:create-a-dataset-zip-sync)\",\n    \"1-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"data=@**{PATH_TO}.zip**\\\"  https://<span></span>api.metamind.io/v1/vision/datasets/upload/sync\\n\\ncurl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"path=**{URL}.zip**\\\"  https://<span></span>api.metamind.io/v1/vision/datasets/upload/sync\"\n  },\n  \"cols\": 2,\n  \"rows\": 5\n}\n[/block]\n##Labels##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Method\",\n    \"h-1\": \"Call\",\n    \"0-0\": \"[Create a Label](doc:create-a-label)\",\n    \"1-0\": \"[Get a Label](doc:get-a-label)\",\n    \"0-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=**{LABEL_NAME}**\\\" https://<span></span>api.metamind.io/v1/vision/datasets/{DATASET_ID}/labels\",\n    \"1-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/vision/datasets/{DATSET_ID}/labels/{LABEL_ID}\"\n  },\n  \"cols\": 2,\n  \"rows\": 2\n}\n[/block]\n##Examples##\n[block:parameters]\n{\n  \"data\": {\n    \"1-0\": \"[Create an Example](doc:create-an-example)\",\n    \"1-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H 
\\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=**{EXAMPLE_NAME}**\\\" -F \\\"labelId={LABEL_ID}\\\" -F \\\"data=@{DIRECTORY/IMAGE_FILE}\\\" https://<span></span>api.metamind.io/v1/vision/datasets/{DATASET_ID}/examples\",\n    \"2-0\": \"[Get an Example](doc:get-an-example)\",\n    \"2-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/vision/datasets/{DATSET_ID}/examples/{EXAMPLE_ID}\",\n    \"3-0\": \"[Get All Examples](doc:get-all-examples)\",\n    \"3-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/vision/datasets/{DATSET_ID}/examples\",\n    \"4-0\": \"[Delete an Example](doc:delete-an-example)\",\n    \"4-1\": \"curl -X DELETE -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/vision/datasets/{DATSET_ID}/examples/{EXAMPLE_ID}\",\n    \"h-0\": \"Method\",\n    \"h-1\": \"Call\",\n    \"0-0\": \"[Create Examples from Zip File](doc:create-examples-from-zip)\",\n    \"0-1\": \"curl -X PUT -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"data=@**{PATH_TO}.zip**\\\"  https://<span></span>api.metamind.io/v1/vision/datasets/{DATSET_ID}/upload\\n\\ncurl -X PUT -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"path=**{URL}.zip**\\\"  https://<span></span>api.metamind.io/v1/vision/datasets/{DATSET_ID}/upload\"\n  },\n  \"cols\": 2,\n  \"rows\": 5\n}\n[/block]\n##Training and Models##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"[Train a Dataset](doc:train-a-dataset)\",\n    \"1-0\": \"[Get Training Status](doc:get-training-status)\",\n    \"2-0\": \"[Get Model Metrics](doc:get-model-metrics)\",\n    \"4-0\": \"[Get 
All Models](doc:get-all-models)\",\n    \"0-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"name=**{MODEL_NAME}**\\\" -F \\\"datasetId=**{DATASET_ID}**\\\" https://<span></span>api.metamind.io/v1/vision/train\",\n    \"1-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/vision/train/{MODEL_ID}\",\n    \"2-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/vision/models/{MODEL_ID}\",\n    \"4-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/vision/datasets/{DATSET_ID}/models\",\n    \"h-0\": \"Method\",\n    \"h-1\": \"Call\",\n    \"3-0\": \"[Get Model Learning Curve](doc:get-model-learning-curve)\",\n    \"3-1\": \"curl -X GET -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" https://<span></span>api.metamind.io/v1/vision/models/{MODEL_ID}/lc\"\n  },\n  \"cols\": 2,\n  \"rows\": 5\n}\n[/block]\n##Predictions##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"[Prediction with Image Base64 String](doc:prediction-with-image-base64-string)\",\n    \"1-0\": \"[Prediction with Image File](doc:prediction-with-image-file)\",\n    \"2-0\": \"[Prediction with Image URL](doc:prediction-with-image-url)\",\n    \"0-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleBase64Content=**{BASE64_STRING}**\\\" -F \\\"modelId=**{MODEL_ID}**\\\" https://<span></span>api.metamind.io/v1/vision/predict\",\n    \"1-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F 
\\\"sampleContent=**{DIRECTORY/IMAGE_FILE}**\\\" -F \\\"modelId=**{MODEL_ID}**\\\" https://<span></span>api.metamind.io/v1/vision/predict\",\n    \"2-1\": \"curl -X POST -H \\\"Authorization: Bearer **{TOKEN}**\\\" -H \\\"Cache-Control: no-cache\\\" -H \\\"Content-Type: multipart/form-data\\\" -F \\\"sampleLocation=**{IMAGE_URL}**\\\" -F \\\"modelId=**{MODEL_ID}**\\\" https://<span></span>api.metamind.io/v1/vision/predict\"\n  },\n  \"cols\": 2,\n  \"rows\": 3\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2016-12-12T19:25:01.689Z","excerpt":"A summary of the API calls you can make to programmatically work with datasets, labels, examples, models, and predictions.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","next":{"pages":[],"description":""},"order":0,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"predictive-vision-service-api","sync_unique":"","title":"Einstein Vision API","type":"basic","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Einstein Vision API

A summary of the API calls you can make to programmatically work with datasets, labels, examples, models, and predictions.

##General API##

| Method | Call |
| --- | --- |
| [Generate an OAuth Token](doc:generate-an-oauth-token) | `curl -H "Content-Type: application/x-www-form-urlencoded" -X POST https://api.metamind.io/v1/oauth2/token -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion={ASSERTION_STRING}"` |
| [Get API Usage](doc:get-api-usage) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/apiusage` |

##Datasets##

| Method | Call |
| --- | --- |
| [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "data=@{PATH_TO}.zip" https://api.metamind.io/v1/vision/datasets/upload`<br><br>`curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path={URL}.zip" https://api.metamind.io/v1/vision/datasets/upload` |
| [Create a Dataset From a Zip File Synchronously](doc:create-a-dataset-zip-sync) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "data=@{PATH_TO}.zip" https://api.metamind.io/v1/vision/datasets/upload/sync`<br><br>`curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path={URL}.zip" https://api.metamind.io/v1/vision/datasets/upload/sync` |
| [Create a Dataset](doc:create-a-dataset) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name={DATASET_NAME}" -F "labels={LABEL1},{LABEL2}" https://api.metamind.io/v1/vision/datasets` |
| [Get a Dataset](doc:get-a-dataset) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}` |
| [Get All Datasets](doc:get-all-datasets) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets` |

##Labels##

| Method | Call |
| --- | --- |
| [Create a Label](doc:create-a-label) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name={LABEL_NAME}" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/labels` |
| [Get a Label](doc:get-a-label) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/labels/{LABEL_ID}` |

##Examples##

| Method | Call |
| --- | --- |
| [Create Examples From Zip File](doc:create-examples-from-zip) | `curl -X PUT -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "data=@{PATH_TO}.zip" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/upload`<br><br>`curl -X PUT -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path={URL}.zip" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/upload` |
| [Create an Example](doc:create-an-example) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name={EXAMPLE_NAME}" -F "labelId={LABEL_ID}" -F "data=@{DIRECTORY/IMAGE_FILE}" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/examples` |
| [Get an Example](doc:get-an-example) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/examples/{EXAMPLE_ID}` |
| [Get All Examples](doc:get-all-examples) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/examples` |
| [Delete an Example](doc:delete-an-example) | `curl -X DELETE -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/examples/{EXAMPLE_ID}` |

##Training and Models##

| Method | Call |
| --- | --- |
| [Train a Dataset](doc:train-a-dataset) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name={MODEL_NAME}" -F "datasetId={DATASET_ID}" https://api.metamind.io/v1/vision/train` |
| [Get Training Status](doc:get-training-status) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/train/{MODEL_ID}` |
| [Get Model Metrics](doc:get-model-metrics) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/models/{MODEL_ID}` |
| [Get Model Learning Curve](doc:get-model-learning-curve) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/models/{MODEL_ID}/lc` |
| [Get All Models](doc:get-all-models) | `curl -X GET -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/{DATASET_ID}/models` |

##Predictions##

| Method | Call |
| --- | --- |
| [Prediction with Image Base64 String](doc:prediction-with-image-base64-string) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleBase64Content={BASE64_STRING}" -F "modelId={MODEL_ID}" https://api.metamind.io/v1/vision/predict` |
| [Prediction with Image File](doc:prediction-with-image-file) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleContent={DIRECTORY/IMAGE_FILE}" -F "modelId={MODEL_ID}" https://api.metamind.io/v1/vision/predict` |
| [Prediction with Image URL](doc:prediction-with-image-url) | `curl -X POST -H "Authorization: Bearer {TOKEN}" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation={IMAGE_URL}" -F "modelId={MODEL_ID}" https://api.metamind.io/v1/vision/predict` |
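The calls above all share the same base URL and, after token generation, the same two request headers. As a minimal sketch, the endpoint layout can be captured in a tiny helper; the `EinsteinVision` class and its method names are illustrative conveniences, not part of the API — only the URLs and headers come from the tables above.

```python
# Minimal sketch of the endpoint layout summarized above.
# The class and helper names are illustrative; only the URLs and
# the Authorization / Cache-Control headers come from the docs.

BASE = "https://api.metamind.io/v1"

class EinsteinVision:
    def __init__(self, token):
        self.token = token

    def headers(self):
        # Every authenticated call sends these two headers.
        return {
            "Authorization": f"Bearer {self.token}",
            "Cache-Control": "no-cache",
        }

    def dataset_url(self, dataset_id=None):
        # /vision/datasets or /vision/datasets/{DATASET_ID}
        url = f"{BASE}/vision/datasets"
        return f"{url}/{dataset_id}" if dataset_id is not None else url

    def train_url(self, model_id=None):
        # POST /vision/train to train; GET /vision/train/{MODEL_ID} for status
        url = f"{BASE}/vision/train"
        return f"{url}/{model_id}" if model_id is not None else url

    def predict_url(self):
        return f"{BASE}/vision/predict"
```

Any HTTP client can then pair these URLs and headers with the multipart form fields shown in the curl calls above.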
{"__v":0,"_id":"58a5e577243dd30f00fd86e5","api":{"auth":"required","examples":{"codes":[{"code":"curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"data=@C:\\Data\\mountainvsbeach.zip\"   https://api.metamind.io/v1/vision/datasets/upload\n\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"path=http://metamind.io/images/mountainvsbeach.zip\"  https://api.metamind.io/v1/vision/datasets/upload","language":"curl"}]},"method":"post","params":[],"results":{"codes":[{"name":"","code":"{\n  \"id\": 1000014,\n  \"name\": \"mountainvsbeach\",\n  \"createdAt\": \"2017-02-16T16:25:57.000+0000\",\n  \"updatedAt\": \"2017-02-16T16:25:57.000+0000\",\n  \"labelSummary\": {\n    \"labels\": []\n  },\n  \"totalExamples\": 0,\n  \"available\": false,\n  \"statusMsg\": \"UPLOADING\",\n  \"type\": \"image\",\n  \"object\": \"dataset\"\n}\n","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":"/vision/datasets/upload"},"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`data`\",\n    \"1-0\": \"`path`\",\n    \"0-1\": \"string\",\n    \"1-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Path to the .zip file on the local drive (FilePart). The maximum .zip file size you can upload from a local drive is 50 MB.\",\n    \"0-3\": \"1.0\",\n    \"1-2\": \"URL of the .zip file. The maximum .zip file size you can upload from a web location is 1 GB.\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\nYou must provide the path to the .zip file on either the local machine or in the cloud. 
This API:\n- Creates a dataset that has the same name as the .zip file (limit is 100 characters).\n- Creates a label for each directory in the .zip file. The label name is the same name as the directory name (limit is 20 characters).\n- Creates an example for each image file in each directory in the .zip file. The example name is the same as the image file name.\n\nThe API call is asynchronous, so you receive a dataset ID back immediately but the `available` value will be `false` and the `statusMsg` value will be `UPLOADING`. Use the dataset ID and make a call to [Get a Dataset](doc:get-a-dataset) to query when the upload is complete. When `available` is `true` and `statusMsg` is `SUCCEEDED`, the data upload is complete, and you can train the dataset to create a model. \n\nKeep the following points in mind when creating datasets.\n- If your .zip file is more than 20 MB, for better performance, we recommend that you upload it to a cloud location that doesn't require authentication and pass the URL in the `path` parameter.\n \n- If you have a large amount of data (gigabytes), you might want to break up your data into multiple .zip files. You can load the first .zip file using this call and then load subsequent .zip files using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip).\n\n- If you create a dataset from a .zip file, you can only add examples to it from a .zip file using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip). You can't add a single example from a file.\n\n- The maximum number of labels per dataset is 250.\n\n- The .zip file must have a specific directory structure:\n - In the root, there should be a parent directory that contains subdirectories. \n - Each subdirectory below the parent directory becomes a label in the dataset. 
This subdirectory must contain images to be added to the dataset.\n - Each subdirectory below the parent directory should contain only images and not any nested subdirectories.\n\n\n- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.\n\n- The maximum image file name length is 100 characters including the file extension. If the .zip file contains a file with a name greater than 100 characters (including the file extension), the example is created in the dataset, but the API truncates the example name to 100 characters.\n\n- The maximum directory name length is 20 characters. If the .zip file contains a directory with a name greater than 20 characters, the label is created in the dataset, but  the API truncates the label name to 20 characters.\n\n- Unicode characters aren't supported in .zip file names, directory names, or image file names. If you have any non-ASCII characters in any of these names, you'll see unpredictable names and training the dataset might fail. For example, if the .zip file contains an image file called HôtelDeVille.jpg, you might be able to upload the example, but the API returns an error when you try to train the dataset.\n\n- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, the image won't be loaded and no error is returned.\n\n- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload images that are larger, but training the dataset might fail.\n\n- The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.\n\n- In the case of duplicate image files in the .zip file, only the first file is uploaded. 
If there's more than one image file in the same directory, with the same name and the same file contents, only the first file is uploaded and the others are skipped.\n\n- If the .zip file contains an image file that has a name containing spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg` the example name becomes `sandybeach.jpg`.\n\n- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`\n\n- If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`.\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`available`\",\n    \"0-1\": \"boolean\",\n    \"0-2\": \"Specifies whether the dataset is ready to be trained.\",\n    \"0-3\": \"1.0\",\n    \"2-0\": \"`id`\",\n    \"2-1\": \"long\",\n    \"2-2\": \"Dataset ID.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`labelSummary`\",\n    \"3-1\": \"object\",\n    \"3-2\": \"Contains the `labels` array that contains all the labels for the dataset. This is an asynchronous call, so the `labels` array is empty when you first create a dataset.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the dataset. 
The API uses the name of the .zip file for the dataset name.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `dataset`.\",\n    \"5-3\": \"1.0\",\n    \"7-0\": \"`totalExamples`\",\n    \"7-1\": \"int\",\n    \"7-2\": \"Total number of examples in the dataset.\",\n    \"7-3\": \"1.0\",\n    \"9-0\": \"`updatedAt`\",\n    \"9-1\": \"date\",\n    \"9-2\": \"Date and time that the dataset was last updated.\",\n    \"9-3\": \"1.0\",\n    \"1-0\": \"`createdAt`\",\n    \"1-1\": \"date\",\n    \"1-2\": \"Date and time that the dataset was created.\",\n    \"1-3\": \"1.0\",\n    \"6-0\": \"`statusMsg`\",\n    \"6-1\": \"string\",\n    \"6-2\": \"Status of the dataset creation and  data upload. Valid values are:\\n- `FAILURE: <failure_reason>`—Data upload has failed.\\n- `SUCCEEDED`—Data upload  is complete.\\n- `UPLOADING`—Data upload is in progress.\",\n    \"6-3\": \"1.0\",\n    \"8-0\": \"`type`\",\n    \"8-1\": \"string\",\n    \"8-2\": \"Type of dataset data. Default is `image`.\",\n    \"8-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 10\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2017-02-16T17:46:31.641Z","excerpt":"Creates a new dataset, labels, and examples from the specified .zip file. The call returns immediately and continues to upload the images in the background.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","next":{"pages":[],"description":""},"order":1,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"create-a-dataset-zip-async","sync_unique":"","title":"Create a Dataset From a Zip File Asynchronously","type":"post","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Create a Dataset From a Zip File Asynchronously (POST)

Creates a new dataset, labels, and examples from the specified .zip file. The call returns immediately and continues to upload the images in the background.

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | string | Path to the .zip file on the local drive (FilePart). The maximum .zip file size you can upload from a local drive is 50 MB. | 1.0 |
| `path` | string | URL of the .zip file. The maximum .zip file size you can upload from a web location is 1 GB. | 1.0 |

You must provide the path to the .zip file either on the local machine or in the cloud. This API:
- Creates a dataset that has the same name as the .zip file (limit is 100 characters).
- Creates a label for each directory in the .zip file. The label name is the same as the directory name (limit is 20 characters).
- Creates an example for each image file in each directory in the .zip file. The example name is the same as the image file name.

The API call is asynchronous, so you receive a dataset ID back immediately, but the `available` value is `false` and the `statusMsg` value is `UPLOADING`. Use the dataset ID and call [Get a Dataset](doc:get-a-dataset) to query whether the upload is complete. When `available` is `true` and `statusMsg` is `SUCCEEDED`, the data upload is complete, and you can train the dataset to create a model.

Keep the following points in mind when creating datasets.
- If your .zip file is more than 20 MB, for better performance, we recommend that you upload it to a cloud location that doesn't require authentication and pass the URL in the `path` parameter.
- If you have a large amount of data (gigabytes), you might want to break up your data into multiple .zip files. You can load the first .zip file using this call and then load subsequent .zip files using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip).
- If you create a dataset from a .zip file, you can only add examples to it from a .zip file using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip). You can't add a single example from a file.
- The maximum number of labels per dataset is 250.
- The .zip file must have a specific directory structure:
  - In the root, there should be a parent directory that contains subdirectories.
  - Each subdirectory below the parent directory becomes a label in the dataset. This subdirectory must contain the images to be added to the dataset.
  - Each subdirectory below the parent directory should contain only images and not any nested subdirectories.
- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.
- The maximum image file name length is 100 characters, including the file extension. If the .zip file contains a file whose name is longer than 100 characters (including the file extension), the example is created in the dataset, but the API truncates the example name to 100 characters.
- The maximum directory name length is 20 characters. If the .zip file contains a directory whose name is longer than 20 characters, the label is created in the dataset, but the API truncates the label name to 20 characters.
- Unicode characters aren't supported in .zip file names, directory names, or image file names. If any of these names contain non-ASCII characters, you'll see unpredictable names, and training the dataset might fail. For example, if the .zip file contains an image file called HôtelDeVille.jpg, you might be able to upload the example, but the API returns an error when you try to train the dataset.
- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, the image isn't loaded and no error is returned.
- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload images that are larger, but training the dataset might fail.
- The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images aren't uploaded and no error is returned.
- In the case of duplicate image files in the .zip file, only the first file is uploaded. If there's more than one image file in the same directory with the same name and the same file contents, only the first file is uploaded and the others are skipped.
- If the .zip file contains an image file whose name contains spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg`, the example name becomes `sandybeach.jpg`.
- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`.
- If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. This is an asynchronous call, so the `labels` array is empty when you first create a dataset. | 1.0 |
| `name` | string | Name of the dataset. The API uses the name of the .zip file for the dataset name. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are:<br>- `FAILURE: <failure_reason>`: Data upload has failed.<br>- `SUCCEEDED`: Data upload is complete.<br>- `UPLOADING`: Data upload is in progress. | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Default is `image`. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |
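Because the upload runs in the background, callers typically poll [Get a Dataset](doc:get-a-dataset) until `available` is `true` and `statusMsg` is `SUCCEEDED`. A minimal polling sketch in Python; the `get_dataset` callable is an assumed stand-in for an HTTP GET against `/v1/vision/datasets/{DATASET_ID}`, injected here so the loop logic stands on its own:

```python
import time

def wait_for_upload(get_dataset, dataset_id, interval=2.0, max_polls=30):
    """Poll the dataset status until the upload finishes or fails.

    `get_dataset` is any callable returning the parsed JSON body of
    Get a Dataset (a stand-in for the real HTTP call, which is an
    assumption of this sketch).
    """
    for _ in range(max_polls):
        ds = get_dataset(dataset_id)
        if ds.get("available") and ds.get("statusMsg") == "SUCCEEDED":
            return ds  # upload complete; the dataset can now be trained
        if str(ds.get("statusMsg", "")).startswith("FAILURE"):
            raise RuntimeError(f"upload failed: {ds['statusMsg']}")
        time.sleep(interval)  # still UPLOADING; wait before the next poll
    raise TimeoutError("dataset upload did not complete in time")
```

In practice `get_dataset` would issue the GET call shown in the Einstein Vision API summary, with the `Authorization: Bearer {TOKEN}` header.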

Definition

`POST https://api.metamind.io/v1/vision/datasets/upload`

Examples

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "data=@C:\Data\mountainvsbeach.zip" https://api.metamind.io/v1/vision/datasets/upload

curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path=http://metamind.io/images/mountainvsbeach.zip" https://api.metamind.io/v1/vision/datasets/upload
```

Result Format

```json
{
  "id": 1000014,
  "name": "mountainvsbeach",
  "createdAt": "2017-02-16T16:25:57.000+0000",
  "updatedAt": "2017-02-16T16:25:57.000+0000",
  "labelSummary": {
    "labels": []
  },
  "totalExamples": 0,
  "available": false,
  "statusMsg": "UPLOADING",
  "type": "image",
  "object": "dataset"
}
```


##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | string | Path to the .zip file on the local drive (FilePart). The maximum .zip file size you can upload from a local drive is 50 MB. | 1.0 |
| `path` | string | URL of the .zip file. The maximum .zip file size you can upload from a web location is 1 GB. | 1.0 |

You must provide the path to the .zip file either on the local machine or in the cloud. This API:

- Creates a dataset that has the same name as the .zip file (limit is 100 characters).
- Creates a label for each directory in the .zip file. The label name is the same as the directory name (limit is 20 characters).
- Creates an example for each image file in each directory in the .zip file. The example name is the same as the image file name.

The API call is asynchronous, so you receive a dataset ID back immediately, but the `available` value is `false` and the `statusMsg` value is `UPLOADING`. Use the dataset ID in a call to [Get a Dataset](doc:get-a-dataset) to query whether the upload is complete. When `available` is `true` and `statusMsg` is `SUCCEEDED`, the data upload is complete, and you can train the dataset to create a model.

Keep the following points in mind when creating datasets:

- If your .zip file is more than 20 MB, for better performance, we recommend that you upload it to a cloud location that doesn't require authentication and pass the URL in the `path` parameter.
- If you have a large amount of data (gigabytes), you might want to break up your data into multiple .zip files. You can load the first .zip file using this call and then load subsequent .zip files using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip).
- If you create a dataset from a .zip file, you can only add examples to it from a .zip file using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip). You can't add a single example from a file.
- The maximum number of labels per dataset is 250.
- The .zip file must have a specific directory structure:
  - In the root, there should be a parent directory that contains subdirectories.
  - Each subdirectory below the parent directory becomes a label in the dataset. This subdirectory must contain the images to be added to the dataset.
  - Each subdirectory below the parent directory should contain only images and no nested subdirectories.
- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.
- The maximum image file name length is 100 characters, including the file extension. If the .zip file contains a file with a longer name, the example is created in the dataset, but the API truncates the example name to 100 characters.
- The maximum directory name length is 20 characters. If the .zip file contains a directory with a longer name, the label is created in the dataset, but the API truncates the label name to 20 characters.
- Unicode characters aren't supported in .zip file names, directory names, or image file names. If any of these names contain non-ASCII characters, you'll see unpredictable names, and training the dataset might fail. For example, if the .zip file contains an image file called HôtelDeVille.jpg, you might be able to upload the example, but the API returns an error when you try to train the dataset.
- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, those images aren't loaded, and no error is returned.
- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload larger images, but training the dataset might fail.
- The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images aren't uploaded, and no error is returned.
- In the case of duplicate image files in the .zip file, only the first file is uploaded. If the same directory contains more than one image file with the same name and the same contents, only the first file is uploaded and the others are skipped.
- If the .zip file contains an image file whose name contains spaces, the spaces are removed from the file name before the file is uploaded. For example, a file called `sandy beach.jpg` becomes the example `sandybeach.jpg`.
- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a direct link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`.
- If you create a dataset or upload images from a .zip file in Apex code, be sure to reference the URL to the file with `https`, not `http`.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. This is an asynchronous call, so the `labels` array is empty when you first create a dataset. | 1.0 |
| `name` | string | Name of the dataset. The API uses the name of the .zip file for the dataset name. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are:<br>- `FAILURE: <failure_reason>`—Data upload has failed.<br>- `SUCCEEDED`—Data upload is complete.<br>- `UPLOADING`—Data upload is in progress. | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Default is `image`. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |
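Because the call is asynchronous, client code typically polls [Get a Dataset](doc:get-a-dataset) until `available` is `true` and `statusMsg` is `SUCCEEDED`. A minimal Python sketch of that polling loop, assuming a `fetch_dataset` callable that wraps the real authenticated GET request (the helper names and structure here are illustrative, not part of the API):

```python
import time

def wait_for_upload(fetch_dataset, interval=2.0, timeout=60.0, sleep=time.sleep):
    """Poll until the dataset upload completes or fails.

    `fetch_dataset` is any callable returning the JSON body of
    GET /v1/vision/datasets/<id> as a dict (a hypothetical wrapper;
    the real request needs an "Authorization: Bearer <TOKEN>" header).
    """
    deadline = time.monotonic() + timeout
    while True:
        dataset = fetch_dataset()
        status = dataset.get("statusMsg", "")
        if dataset.get("available") and status == "SUCCEEDED":
            return dataset  # upload complete; the dataset can now be trained
        if status.startswith("FAILURE"):
            raise RuntimeError(f"upload failed: {status}")
        if time.monotonic() >= deadline:
            raise TimeoutError("dataset still UPLOADING after timeout")
        sleep(interval)
```

Passing `sleep` as a parameter keeps the loop testable without real delays.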
Create a Dataset From a Zip File Synchronously

Creates a new dataset, labels, and examples from the specified .zip file. The call returns after the dataset is created and all of the images are uploaded. Use this API call for .zip files that are less than 10 MB.

Definition

POST https://api.metamind.io/v1/vision/datasets/upload/sync

Examples

```shell
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "data=@C:\Data\mountainvsbeach.zip" https://api.metamind.io/v1/vision/datasets/upload/sync

curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path=http://metamind.io/images/mountainvsbeach.zip" https://api.metamind.io/v1/vision/datasets/upload/sync
```

Result Format

```json
{
  "id": 1000022,
  "name": "mountainvsbeach",
  "createdAt": "2017-02-16T22:26:21.000+0000",
  "updatedAt": "2017-02-16T22:26:21.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 1814,
        "datasetId": 1000022,
        "name": "Mountains",
        "numExamples": 50
      },
      {
        "id": 1815,
        "datasetId": 1000022,
        "name": "Beaches",
        "numExamples": 49
      }
    ]
  },
  "totalExamples": 99,
  "totalLabels": 2,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "image",
  "object": "dataset"
}
```

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | string | Path to the .zip file on the local drive (FilePart). The maximum .zip file size you can upload from a local drive is 50 MB. | 1.0 |
| `path` | string | URL of the .zip file. The maximum .zip file size you can upload from a web location is 1 GB. | 1.0 |

You must provide the path to the .zip file either on the local machine or in the cloud. This API:

- Creates a dataset that has the same name as the .zip file (limit is 100 characters).
- Creates a label for each directory in the .zip file. The label name is the same as the directory name (limit is 20 characters).
- Creates an example for each image file in each directory in the .zip file. The example name is the same as the file name.

The API call is synchronous, so results are returned after the data has been uploaded to the dataset. If the call succeeds, it returns the `labels` array, `available` is `true`, and `statusMsg` is `SUCCEEDED`.

Keep the following points in mind when creating datasets:

- If your .zip file is more than 10 MB, we recommend that you use the asynchronous call to create a dataset. See [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).
- If you have a large amount of data (gigabytes), you might want to break up your data into multiple .zip files. You can load the first .zip file using this call and then load subsequent .zip files using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip).
- If you create a dataset from a .zip file, you can only add examples to it from a .zip file using PUT. See [Create Examples From Zip File](doc:create-examples-from-zip). You can't add a single example from a file.
- The maximum number of labels per dataset is 250.
- The .zip file must have a specific directory structure:
  - In the root, there should be a parent directory that contains subdirectories.
  - Each subdirectory below the parent directory becomes a label in the dataset. This subdirectory must contain the images to be added to the dataset.
  - Each subdirectory below the parent directory should contain only images and no nested subdirectories.
- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.
- The maximum image file name length is 100 characters, including the file extension. If the .zip file contains a file with a longer name, the example is created in the dataset, but the API truncates the example name to 100 characters.
- The maximum directory name length is 20 characters. If the .zip file contains a directory with a longer name, the label is created in the dataset, but the API truncates the label name to 20 characters.
- Unicode characters aren't supported in .zip file names, directory names, or image file names. If any of these names contain non-ASCII characters, you'll see unpredictable names, and training the dataset might fail. For example, if the .zip file contains an image file called HôtelDeVille.jpg, you might be able to upload the example, but the API returns an error when you try to train the dataset.
- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, those images aren't loaded, and no error is returned.
- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload larger images, but training the dataset might fail.
- The supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images aren't uploaded, and no error is returned.
- In the case of duplicate image files in the .zip file, only the first file is uploaded. If the same directory contains more than one image file with the same name and the same contents, only the first file is uploaded and the others are skipped.
- If the .zip file contains an image file whose name contains spaces, the spaces are removed from the file name before the file is uploaded. For example, a file called `sandy beach.jpg` becomes the example `sandybeach.jpg`.
- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a direct link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`.
- If you create a dataset or upload images from a .zip file in Apex code, be sure to reference the URL to the file with `https`, not `http`.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. The API uses the name of the .zip file for the dataset name. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are:<br>- `FAILURE: <failure_reason>`—Data upload has failed.<br>- `SUCCEEDED`—Data upload is complete.<br>- `UPLOADING`—Data upload is in progress. | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Default is `image`. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Labels Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |
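The naming and file-type rules above can be checked locally before building the .zip file. A sketch of such a pre-flight check, assuming the 1 MB limit means 10^6 bytes (the helper names are illustrative and not part of the API):

```python
import os

MAX_EXAMPLE_NAME = 100   # image file name limit, including the extension
MAX_LABEL_NAME = 20      # directory (label) name limit
SUPPORTED_TYPES = {".png", ".jpg", ".jpeg"}

def normalized_example_name(file_name):
    """Predict the example name the API stores for an image file:
    spaces removed, then truncated to 100 characters."""
    return file_name.replace(" ", "")[:MAX_EXAMPLE_NAME]

def normalized_label_name(dir_name):
    """Predict the label name: directory name truncated to 20 characters."""
    return dir_name[:MAX_LABEL_NAME]

def check_image(file_name, size_bytes):
    """Return a list of problems that, per the documented rules, would
    make the API silently skip this image or fail during training."""
    problems = []
    ext = os.path.splitext(file_name)[1].lower()
    if ext not in SUPPORTED_TYPES:
        problems.append("unsupported type (skipped silently)")
    if size_bytes >= 1_000_000:  # assumption: 1 MB treated as 10^6 bytes
        problems.append("1 MB or larger (skipped silently)")
    if not file_name.isascii():
        problems.append("non-ASCII name (training may fail)")
    return problems
```

Running the check over every file in each label directory before zipping avoids the silent-skip behavior described above, since the API returns no error for oversized or unsupported images.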
{"__v":8,"_id":"57e031ea80aef10e00899160","api":{"auth":"required","examples":{"codes":[{"language":"curl","code":"curl -X POST -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"name=Beach and Mountain\" -F \"labels=beach,mountain\" https://api.metamind.io/v1/vision/datasets"}]},"method":"post","params":[],"results":{"codes":[{"name":"","code":"{\n  \"id\": 57,\n  \"name\": \"Beach and Mountain\",\n  \"createdAt\": \"2016-09-15T16:51:41.000+0000\",\n  \"updatedAt\": \"2016-09-15T16:51:41.000+0000\",\n  \"labelSummary\": {\n    \"labels\": [\n      {\n        \"id\": 611,\n        \"datasetId\": 57,\n        \"name\": \"beach\",\n        \"numExamples\": 0\n      },\n    {\n        \"id\": 612,\n        \"datasetId\": 57,\n        \"name\": \"mountain\",\n        \"numExamples\": 0\n      }\n          ]\n  },\n  \"totalExamples\": 0,\n  \"totalLabels\": 2,\n  \"available\": true,\n  \"statusMsg\": \"SUCCEEDED\",\n  \"type\": \"image\",\n  \"object\": \"dataset\"\n}\n","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":"/vision/datasets"},"body":"[block:callout]\n{\n  \"type\": \"warning\",\n  \"title\": \"Warning\",\n  \"body\": \"For better performance, we recommend that you create a dataset by uploading a .zip file. See [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).\"\n}\n[/block]\n##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`labels`\",\n    \"1-0\": \"`name`\",\n    \"0-1\": \"string\",\n    \"1-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Optional comma-separated list of labels. If specified, creates the labels in the dataset. Maximum number of labels per dataset is 250.\",\n    \"0-3\": \"1.0\",\n    \"1-2\": \"Name of the dataset. 
Maximum length is 180 characters.\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\nKeep the following points in mind when creating datasets.\n- If you pass labels in when you create a dataset, the label names can’t contain a comma. If you’re adding a label that contains a comma, use the call to create a single label. See [Create a Label](doc:create-a-label). \n- Unicode characters aren't supported in the dataset name or in label names. For example, if you create a dataset with the name `Hôtel` it converts the unicode to ASCII.\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"1-0\": \"`createdAt`\",\n    \"1-1\": \"date\",\n    \"1-2\": \"Date and time that the dataset was created.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`id`\",\n    \"2-1\": \"long\",\n    \"2-2\": \"Dataset ID.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`labelSummary`\",\n    \"3-1\": \"object\",\n    \"3-2\": \"Contains the `labels` array that contains all the labels for the dataset.\",\n    \"3-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the dataset.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `dataset`.\",\n    \"5-3\": \"1.0\",\n    \"7-0\": \"`totalExamples`\",\n    \"7-1\": \"int\",\n    \"7-2\": \"Total number of examples in the dataset.\",\n    \"7-3\": \"1.0\",\n    \"8-0\": \"`totalLabels`\",\n    \"8-1\": \"int\",\n    \"8-2\": \"Total number of labels in the dataset.\",\n    \"8-3\": \"1.0\",\n    \"10-0\": \"`updatedAt`\",\n    \"10-1\": \"date\",\n    \"10-2\": \"Date and time that the dataset was last updated.\",\n    \"10-3\": \"1.0\",\n    \"0-0\": \"`available`\",\n    \"0-1\": \"boolean\",\n    \"0-2\": \"Specifies whether the dataset is ready to be trained.\",\n    \"0-3\": \"1.0\",\n    
\"6-0\": \"`statusMsg`\",\n    \"6-1\": \"string\",\n    \"6-3\": \"1.0\",\n    \"6-2\": \"Status of the dataset creation and data upload. Valid values are:\\n- `FAILURE: <failure_reason>`—Data upload has failed.\\n- `SUCCEEDED`—Data upload is complete.\\n- `UPLOADING`—Data upload is in progress.\",\n    \"9-0\": \"`type`\",\n    \"9-1\": \"string\",\n    \"9-2\": \"Type of dataset data. Default is `image`.\",\n    \"9-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 11\n}\n[/block]\n## Labels Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the label.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Name of the label.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2016-09-19T18:43:54.306Z","excerpt":"Creates a new dataset and labels, if they're specified.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":3,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"create-a-dataset","sync_unique":"","title":"Create a Dataset","type":"post","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Create a Dataset (POST)

Creates a new dataset and labels, if they're specified.

> **Warning:** For better performance, we recommend that you create a dataset by uploading a .zip file. See [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async).

##Request Parameters##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `labels` | string | Optional comma-separated list of labels. If specified, creates the labels in the dataset. Maximum number of labels per dataset is 250. | 1.0 |
| `name` | string | Name of the dataset. Maximum length is 180 characters. | 1.0 |

Keep the following points in mind when creating datasets.
- If you pass labels in when you create a dataset, the label names can’t contain a comma. To add a label that contains a comma, use the call that creates a single label. See [Create a Label](doc:create-a-label).
- Unicode characters aren't supported in the dataset name or in label names. For example, if you create a dataset with the name `Hôtel`, the API converts the Unicode characters to ASCII.
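These constraints can be checked client-side before issuing the request. A minimal sketch in Python (the helper name and error strings are ours, not part of the API):

```python
def validate_dataset_request(name, labels=None):
    """Client-side pre-checks mirroring the documented dataset constraints."""
    errors = []
    if not name or len(name) > 180:
        errors.append("name must be 1-180 characters")
    elif not name.isascii():
        errors.append("name should be ASCII; the API converts Unicode to ASCII")
    if labels is not None:
        # `labels` is a comma-separated string, so a label containing a comma
        # can't be passed here; use the Create a Label call for those.
        if len(labels.split(",")) > 250:
            errors.append("maximum of 250 labels per dataset")
    return errors
```

An empty list means the request is safe to send as-is; the server remains the source of truth for validation.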
##Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are:<br>- `FAILURE: <failure_reason>`—Data upload has failed.<br>- `SUCCEEDED`—Data upload is complete.<br>- `UPLOADING`—Data upload is in progress. | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Default is `image`. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Labels Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |
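The `statusMsg` values above lend themselves to a simple polling decision. A sketch (the function and its return labels are ours, not part of the API):

```python
def upload_state(status_msg):
    """Map the documented statusMsg values to a simple polling decision."""
    if status_msg == "SUCCEEDED":
        return "done"       # data upload is complete
    if status_msg == "UPLOADING":
        return "wait"       # upload still in progress; poll again later
    if status_msg.startswith("FAILURE"):
        # statusMsg carries the reason, e.g. "FAILURE: <failure_reason>"
        return "failed"
    return "unknown"
```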

Definition

```
POST https://api.metamind.io/v1/vision/datasets
```


Get a Dataset (GET)

Returns a single dataset.

##Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are:<br>- `FAILURE: <failure_reason>`—Data upload has failed.<br>- `SUCCEEDED`—Data upload is complete.<br>- `UPLOADING`—Data upload is in progress. | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Default is `image`. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Labels Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |
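The fields above can be pulled out of a response with ordinary JSON parsing. A sketch using an abbreviated version of this call's sample response (the `label_counts` helper is ours, not part of the API):

```python
import json

# Abbreviated sample 200 response for this call (only the fields used below).
sample = '''
{
  "id": 57,
  "name": "Beach and Mountain",
  "labelSummary": {"labels": [
    {"id": 612, "datasetId": 57, "name": "beach", "numExamples": 49},
    {"id": 611, "datasetId": 57, "name": "mountain", "numExamples": 50}
  ]},
  "totalExamples": 99,
  "totalLabels": 2,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "object": "dataset"
}
'''

def label_counts(dataset_json):
    """Return {label name: numExamples} from a dataset response body."""
    dataset = json.loads(dataset_json)
    return {l["name"]: l["numExamples"] for l in dataset["labelSummary"]["labels"]}
```

In the sample, the per-label `numExamples` values (49 + 50) add up to `totalExamples` (99), and `totalLabels` matches the length of the `labels` array.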

Definition

```
GET https://api.metamind.io/v1/vision/datasets/<DATASET_ID>
```

Examples

```
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/57
```

Result Format

```json
{
  "id": 57,
  "name": "Beach and Mountain",
  "createdAt": "2016-09-15T16:51:41.000+0000",
  "updatedAt": "2016-09-15T16:51:41.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 612,
        "datasetId": 57,
        "name": "beach",
        "numExamples": 49
      },
      {
        "id": 611,
        "datasetId": 57,
        "name": "mountain",
        "numExamples": 50
      }
    ]
  },
  "totalExamples": 99,
  "totalLabels": 2,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "image",
  "object": "dataset"
}
```

Get All Datasets (GET)

Returns a list of datasets and their labels that were created using the specified API key. The response is sorted by dataset ID.

##Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `data` | array | Array of `dataset` objects. | 1.0 |
| `object` | string | Object returned; in this case, `list`. | 1.0 |

##Dataset Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | Dataset ID. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Default is `image`. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

##Labels Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |

##Query Parameters##

By default, this call returns 100 datasets. If you want to page through your datasets, use the `offset` and `count` query parameters.

| Name | Description | Available Version |
|---|---|---|
| `count` | Number of datasets to return. Optional. | 1.0 |
| `offset` | Index of the dataset from which you want to start paging. Optional. | 1.0 |

Here's an example of these query parameters. If you omit the `count` parameter, the API returns 100 datasets. If you omit the `offset` parameter, paging starts at 0. (The URL is quoted so that the shell doesn't interpret the `&` in the query string.)

```
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" "https://api.metamind.io/v1/vision/datasets?offset=100&count=50"
```

For example, let's say you want to page through all of your datasets and show 50 at a time. The first call would have `offset=0` and `count=50`, the second call would have `offset=50` and `count=50`, and so on.
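The offset/count arithmetic in that example can be sketched as a generator (the function is ours; in practice you usually don't know the total up front, so you'd keep paging until a call returns fewer than `count` datasets):

```python
def page_params(total, page_size=50):
    """Yield (offset, count) query-parameter pairs for paging through datasets."""
    offset = 0
    while offset < total:
        yield offset, page_size
        offset += page_size
```

For 120 datasets and a page size of 50, this yields `(0, 50)`, `(50, 50)`, `(100, 50)`, matching the calls described above.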

Definition

```
GET https://api.metamind.io/v1/vision/datasets
```

Examples

```
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets
```

Result Format

```json
{
  "object": "list",
  "data": [
    {
      "id": 57,
      "name": "Beach and Mountain",
      "updatedAt": "2016-09-09T22:39:22.000+0000",
      "createdAt": "2016-09-09T22:39:22.000+0000",
      "labelSummary": {
        "labels": [
          {
            "id": 36,
            "datasetId": 57,
            "name": "beach",
            "numExamples": 49
          },
          {
            "id": 37,
            "datasetId": 57,
            "name": "mountain",
            "numExamples": 50
          }
        ]
      },
      "totalExamples": 99,
      "totalLabels": 2,
      "available": true,
      "statusMsg": "SUCCEEDED",
      "type": "image",
      "object": "dataset"
    },
    {
      "id": 58,
      "name": "Brain Scans",
      "updatedAt": "2016-09-24T21:35:27.000+0000",
      "createdAt": "2016-09-24T21:35:27.000+0000",
      "labelSummary": {
        "labels": [
          {
            "id": 122,
            "datasetId": 58,
            "name": "healthy",
            "numExamples": 5064
          },
          {
            "id": 123,
            "datasetId": 58,
            "name": "unhealthy",
            "numExamples": 5080
          }
        ]
      },
      "totalExamples": 10144,
      "totalLabels": 2,
      "available": true,
      "statusMsg": "SUCCEEDED",
      "type": "image",
      "object": "dataset"
    }
  ]
}
```

Delete a Dataset (DELETE)

Deletes the specified dataset and associated labels, examples, and models.

This call doesn’t return a response body. Instead, it returns an HTTP status code 204.

Definition

```
DELETE https://api.metamind.io/v1/vision/datasets/<DATASET_ID>
```

Examples

```
curl -X DELETE -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/108
```

Result Format

A successful call returns an HTTP status code 204 with an empty response body.

Create a Label

Creates a label in the specified dataset.

## Request Parameters

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `name` | string | Name of the label. Must be unique in the dataset; otherwise, you receive an HTTP 400 error. Maximum length is 180 characters. | 1.0 |

You can add a label only before the dataset has been successfully trained. If the dataset has an associated model with a status of `QUEUED`, `RUNNING`, or `SUCCEEDED`, adding a label returns an error. If the dataset has only models with a `FAILED` status, you can continue to add labels.

Keep the following points in mind when creating labels.

- You can’t delete a label. To change the labels in a dataset, recreate the dataset with the correct labels.
- The label name must be unique within the dataset. Otherwise, the call returns an HTTP status of 400 Failed.
- Unicode characters aren't supported in label names. If your names contain Unicode characters, you'll see unpredictable results when creating labels or training the dataset.
- We recommend a maximum of 250 labels per dataset.
- A dataset must have a minimum of two labels to create a model.

## Response Body

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `object` | string | Object returned; in this case, `label`. | 1.0 |
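The constraints above can be checked client-side before calling the endpoint. The following is a minimal sketch; the function name and the exact error strings are illustrative, not part of the API or any official SDK:

```python
def validate_label_name(name, existing_names):
    """Check a proposed label name against the constraints documented
    for the Create a Label endpoint. Returns a list of problems;
    an empty list means the name looks safe to submit."""
    errors = []
    if not name:
        errors.append("name is required")
    if len(name) > 180:
        errors.append("name exceeds the 180-character maximum")
    if name in existing_names:
        errors.append("name must be unique within the dataset")
    if not name.isascii():
        errors.append("Unicode characters aren't supported in label names")
    if len(existing_names) >= 250:
        errors.append("dataset already has the recommended maximum of 250 labels")
    return errors

# A unique, ASCII-only name within the length limit passes:
print(validate_label_name("beach", ["mountain"]))  # []
```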

Definition

`POST https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/labels`

Examples

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=beach" https://api.metamind.io/v1/vision/datasets/57/labels
```

Result Format

```json
{
  "id": 614,
  "datasetId": 57,
  "name": "beach",
  "object": "label"
}
```

{"__v":1,"_id":"57e5ad09bc0e7d0e00bf34e7","api":{"auth":"required","examples":{"codes":[{"code":"curl -X GET -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" \"https://api.metamind.io/v1/vision/datasets/57/labels/614","language":"curl"}]},"method":"get","params":[],"results":{"codes":[{"status":200,"language":"json","code":"{\n  \"id\": 614,\n  \"datasetId\": 57,\n  \"name\": \"beach,\n  \"object\": \"label\"\n}","name":""},{"status":400,"language":"json","code":"{}","name":""}]},"settings":"","url":"/vision/datasets/<DATASET_ID>/labels/<LABEL_ID>"},"body":"##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"1-2\": \"ID of the label.\",\n    \"1-3\": \"1.0\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"2-2\": \"Name of the label.\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`object`\",\n    \"3-1\": \"string\",\n    \"3-2\": \"Object returned; in this case, `label`.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2016-09-23T22:30:33.952Z","excerpt":"Returns the label for the specified ID.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","next":{"description":"","pages":[]},"order":8,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"get-a-label","sync_unique":"","title":"Get a Label","type":"get","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Get a Label

Returns the label for the specified ID.

## Response Body

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `object` | string | Object returned; in this case, `label`. | 1.0 |

Definition

`GET https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/labels/<LABEL_ID>`

Examples

```
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" "https://api.metamind.io/v1/vision/datasets/57/labels/614"
```

Result Format

```json
{
  "id": 614,
  "datasetId": 57,
  "name": "beach",
  "object": "label"
}
```

{"__v":0,"_id":"58a632c03239fa0f0085756b","api":{"auth":"required","examples":{"codes":[{"language":"curl","code":"curl -X PUT -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"data=@C:\\Data\\mountainvsbeach.zip\"  https://api.metamind.io/v1/vision/datasets/1000022/upload\n\ncurl -X PUT -H \"Authorization: Bearer <TOKEN>\" -H \"Cache-Control: no-cache\" -H \"Content-Type: multipart/form-data\" -F \"path=http://metamind.io/images/mountainvsbeach.zip\"  https://api.metamind.io/v1/vision/datasets/1000022/upload"}]},"method":"put","params":[],"results":{"codes":[{"name":"","code":"{\n  \"id\": 1000022,\n  \"name\": \"mountainvsbeach\",\n  \"createdAt\": \"2017-02-17T00:22:10.000+0000\",\n  \"updatedAt\": \"2017-02-17T00:22:12.000+0000\",\n  \"labelSummary\": {\n    \"labels\": [\n      {\n        \"id\": 1819,\n        \"datasetId\": 1000022,\n        \"name\": \"Mountains\",\n        \"numExamples\": 50\n      },\n      {\n        \"id\": 1820,\n        \"datasetId\": 1000022,\n        \"name\": \"Beaches\",\n        \"numExamples\": 49\n      }\n    ]\n  },\n  \"totalExamples\": 99,\n  \"totalLabels\": 2,\n  \"available\": false,\n  \"statusMsg\": \"UPLOADING\",\n  \"type\": \"image\",\n  \"object\": \"dataset\"\n}","language":"json","status":200},{"name":"","code":"{}","language":"json","status":400}]},"settings":"","url":"/vision/datasets/<DATASET_ID>/upload"},"body":"##Request Parameters##\n[block:parameters]\n{\n  \"data\": {\n    \"0-0\": \"`data`\",\n    \"0-1\": \"string\",\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-2\": \"Path to the .zip file on the local drive. The maximum file size you can upload from a local drive is 50 MB.\",\n    \"0-3\": \"1.0\",\n    \"1-0\": \"`path`\",\n    \"1-1\": \"string\",\n    \"1-2\": \"URL of the .zip file. 
The maximum file size you can upload from a web location is 1 GB.\",\n    \"1-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 2\n}\n[/block]\nYou must provide the path to the .zip file on either the local machine or in the cloud. This call adds examples to the specified dataset from a .zip file. This is an asynchronous call, so the results that are initially returned contain information for the original dataset and `available` is `false`. \n\nUse the dataset ID and make a call to [Get a Dataset](doc:get-a-dataset) to query when the upload is complete. When `available` is `true` the data upload is complete. \n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"id\\\": 1000022,\\n  \\\"name\\\": \\\"mountainvsbeach\\\",\\n  \\\"createdAt\\\": \\\"2017-02-17T00:22:10.000+0000\\\",\\n  \\\"updatedAt\\\": \\\"2017-02-17T00:29:56.000+0000\\\",\\n  \\\"labelSummary\\\": {\\n    \\\"labels\\\": [\\n      {\\n        \\\"id\\\": 1819,\\n        \\\"datasetId\\\": 1000022,\\n        \\\"name\\\": \\\"Mountains\\\",\\n        \\\"numExamples\\\": 150\\n      },\\n      {\\n        \\\"id\\\": 1820,\\n        \\\"datasetId\\\": 1000022,\\n        \\\"name\\\": \\\"Beaches\\\",\\n        \\\"numExamples\\\": 147\\n      }\\n    ]\\n  },\\n  \\\"totalExamples\\\": 297,\\n  \\\"totalLabels\\\": 2,\\n  \\\"available\\\": true,\\n  \\\"statusMsg\\\": \\\"SUCCEEDED\\\",\\n  \\\"type\\\": \\\"image\\\",\\n  \\\"object\\\": \\\"dataset\\\"\\n}\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\nKeep the following points in mind when creating examples from a .zip file:\n- If the .zip file contains a directory label that's already in the dataset, the API adds the images from that directory to the specified label in the dataset.\n \n- If the .zip file contains a directory name that isn't a label in the dataset, the API adds a new label (limit is 20 characters).\n\n- If you try to create examples in a dataset while a previous call to create examples is still 
processing (the dataset's `available` value is `false`), the call fails and you receive an error. You must wait until the dataset's `available` value is `true` before starting another upload.\n\n- The .zip file must have a specific directory structure:\n - In the root, there should be a parent directory that contains subdirectories. \n - Each subdirectory below the parent directory becomes a label in the dataset unless the directory name matches a label that's already in the dataset. This subdirectory must contain images to be added to the dataset.\n - Each subdirectory below the parent directory should contain only images and not any nested subdirectories.\n\n\n- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.\n\n- The maximum image file name length is 100 characters including the file extension.  If the .zip file contains a file with a name greater than 100 characters (including the file extension), the example is created in the dataset but API truncates the example name to 100 characters.\n\n- The maximum directory name length is 20 characters. If the .zip file contains a directory with a name greater than 20 characters, the label is created in the dataset, but  the API truncates the label name to 20 characters.\n\n- Unicode characters aren't supported in .zip file names, directory names, or image file names. If you have any non-ASCII characters in any of these names, you'll see unpredictable names and training the dataset might fail. For example, if the .zip file contains an image file called HôtelDeVille.jpg, you might be able to upload the example, but the API returns an error when you try to train the dataset.\n\n- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, the image won't be loaded and no error is returned.\n\n- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. 
You can upload images that are larger, but training the dataset might fail.\n\n- Supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.\n\n- In the case of duplicate image files in the .zip file, only the first file is uploaded. If there's more than one image file in the same directory, with the same name and the same file contents, only the first file is uploaded and the others are skipped. However, this API doesn't check for duplicates between the .zip file and the images already in the dataset.\n\n- If the .zip file contains an image file that has a name containing spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg` the example name becomes `sandybeach.jpg`.\n\n- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. 
For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`\n\n- If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`.\n\n##Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"2-0\": \"`id`\",\n    \"2-1\": \"long\",\n    \"2-2\": \"ID of the example.\",\n    \"2-3\": \"1.0\",\n    \"4-0\": \"`name`\",\n    \"4-1\": \"string\",\n    \"4-2\": \"Name of the example.\",\n    \"4-3\": \"1.0\",\n    \"5-0\": \"`object`\",\n    \"5-1\": \"string\",\n    \"5-2\": \"Object returned; in this case, `dataset`.\",\n    \"5-3\": \"1.0\",\n    \"1-0\": \"`createdAt`\",\n    \"1-1\": \"date\",\n    \"1-2\": \"Date and time that the example was created.\",\n    \"1-3\": \"1.0\",\n    \"3-0\": \"`labelSummary`\",\n    \"3-1\": \"object\",\n    \"3-2\": \"Contains the `labels` array that contains all the labels for the dataset.\",\n    \"3-3\": \"1.0\",\n    \"0-0\": \"`available`\",\n    \"0-1\": \"boolean\",\n    \"0-2\": \"Specifies whether the dataset is ready to be trained.\",\n    \"0-3\": \"1.0\",\n    \"7-0\": \"`totalExamples`\",\n    \"7-1\": \"int\",\n    \"7-2\": \"Total number of examples in the dataset.\",\n    \"7-3\": \"1.0\",\n    \"8-0\": \"`totalLabels`\",\n    \"8-1\": \"int\",\n    \"8-2\": \"Total number of labels in the dataset.\",\n    \"8-3\": \"1.0\",\n    \"10-0\": \"`updatedAt`\",\n    \"10-1\": \"date\",\n    \"10-2\": \"Date and time that the datset was last updated.\",\n    \"10-3\": \"1.0\",\n    \"6-0\": \"`statusMsg`\",\n    \"6-1\": \"string\",\n    \"6-3\": \"1.0\",\n    \"6-2\": \"Status of the dataset creation and data upload. 
Valid values are:\\n- `FAILURE: <failure_reason>`—Data upload has failed.\\n- `SUCCEEDED`—Data upload is complete.\\n- `UPLOADING`—Data upload is in progress.\",\n    \"9-0\": \"`type`\",\n    \"9-1\": \"string\",\n    \"9-2\": \"Type of dataset data. Default is `image`.\",\n    \"9-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 11\n}\n[/block]\n##Label Response Body##\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Name\",\n    \"h-1\": \"Type\",\n    \"h-2\": \"Description\",\n    \"h-3\": \"Available Version\",\n    \"0-0\": \"`datasetId`\",\n    \"0-1\": \"long\",\n    \"1-0\": \"`id`\",\n    \"1-1\": \"long\",\n    \"2-0\": \"`name`\",\n    \"2-1\": \"string\",\n    \"0-2\": \"ID of the dataset that the label belongs to.\",\n    \"1-2\": \"ID of the label.\",\n    \"2-2\": \"Name of the label.\",\n    \"0-3\": \"1.0\",\n    \"1-3\": \"1.0\",\n    \"2-3\": \"1.0\",\n    \"3-0\": \"`numExamples`\",\n    \"3-1\": \"int\",\n    \"3-2\": \"Number of examples that have the label.\",\n    \"3-3\": \"1.0\"\n  },\n  \"cols\": 4,\n  \"rows\": 4\n}\n[/block]","category":"57dee8de84019d2000e95af3","createdAt":"2017-02-16T23:16:16.110Z","excerpt":"Adds examples from a .zip file to a dataset. You can use this call only with a dataset that was created from a .zip file.","githubsync":"","hidden":false,"isReference":true,"link_external":false,"link_url":"","next":{"pages":[],"description":""},"order":9,"parentDoc":null,"project":"552d474ea86ee20d00780cd7","slug":"create-examples-from-zip","sync_unique":"","title":"Create Examples From a Zip File","type":"put","updates":[],"user":"573b5a1f37fcf72000a2e683","version":"57c765bda54f9c0e00cec388","childrenPages":[]}

Create Examples From a Zip File

Adds examples from a .zip file to a dataset. You can use this call only with a dataset that was created from a .zip file.

## Request Parameters

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `data` | string | Path to the .zip file on the local drive. The maximum file size you can upload from a local drive is 50 MB. | 1.0 |
| `path` | string | URL of the .zip file. The maximum file size you can upload from a web location is 1 GB. | 1.0 |

You must provide the path to the .zip file either on the local machine or in the cloud. This call adds examples to the specified dataset from a .zip file. This is an asynchronous call, so the results that are initially returned contain information for the original dataset and `available` is `false`.

Use the dataset ID and make a call to [Get a Dataset](doc:get-a-dataset) to query when the upload is complete. When `available` is `true`, the data upload is complete.

```json
{
  "id": 1000022,
  "name": "mountainvsbeach",
  "createdAt": "2017-02-17T00:22:10.000+0000",
  "updatedAt": "2017-02-17T00:29:56.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 1819,
        "datasetId": 1000022,
        "name": "Mountains",
        "numExamples": 150
      },
      {
        "id": 1820,
        "datasetId": 1000022,
        "name": "Beaches",
        "numExamples": 147
      }
    ]
  },
  "totalExamples": 297,
  "totalLabels": 2,
  "available": true,
  "statusMsg": "SUCCEEDED",
  "type": "image",
  "object": "dataset"
}
```

Keep the following points in mind when creating examples from a .zip file:

- If the .zip file contains a directory label that's already in the dataset, the API adds the images from that directory to the specified label in the dataset.
- If the .zip file contains a directory name that isn't a label in the dataset, the API adds a new label (limit is 20 characters).
- If you try to create examples in a dataset while a previous call to create examples is still processing (the dataset's `available` value is `false`), the call fails and you receive an error. You must wait until the dataset's `available` value is `true` before starting another upload.
- The .zip file must have a specific directory structure:
  - In the root, there should be a parent directory that contains subdirectories.
  - Each subdirectory below the parent directory becomes a label in the dataset unless the directory name matches a label that's already in the dataset. This subdirectory must contain the images to be added to the dataset.
  - Each subdirectory below the parent directory should contain only images and not any nested subdirectories.
- If the .zip file has an incorrect structure, the API returns an error: `FAILED: Invalid zip format provided for <dataset_name>`.
- The maximum image file name length is 100 characters, including the file extension. If the .zip file contains a file with a name longer than 100 characters (including the file extension), the example is created in the dataset, but the API truncates the example name to 100 characters.
- The maximum directory name length is 20 characters. If the .zip file contains a directory with a name longer than 20 characters, the label is created in the dataset, but the API truncates the label name to 20 characters.
- Unicode characters aren't supported in .zip file names, directory names, or image file names. If you have any non-ASCII characters in any of these names, you'll see unpredictable names, and training the dataset might fail. For example, if the .zip file contains an image file called HôtelDeVille.jpg, you might be able to upload the example, but the API returns an error when you try to train the dataset.
- Image files must be smaller than 1 MB. If the .zip file contains image files larger than 1 MB, those images won't be loaded and no error is returned.
- Images must be no larger than 2,000 pixels high by 2,000 pixels wide. You can upload images that are larger, but training the dataset might fail.
- Supported image file types are PNG, JPG, and JPEG. If the .zip file contains any unsupported image file types, those images won't be uploaded and no error is returned.
- In the case of duplicate image files in the .zip file, only the first file is uploaded. If there's more than one image file in the same directory with the same name and the same file contents, only the first file is uploaded and the others are skipped. However, this API doesn't check for duplicates between the .zip file and the images already in the dataset.
- If the .zip file contains an image file that has a name containing spaces, the spaces are removed from the file name before the file is uploaded. For example, if you have a file called `sandy beach.jpg`, the example name becomes `sandybeach.jpg`.
- When specifying the URL for a .zip file in a cloud drive service like Dropbox, be sure it's a link to the file and not a link to the interactive download page. For example, the URL should look like `https://www.dropbox.com/s/abcdxyz/mountainvsbeach.zip?dl=1`.
- If you create a dataset or upload images from a .zip file in Apex code, be sure that you reference the URL to the file with `https` and not `http`.
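Several of these .zip constraints can be checked locally before uploading, using Python's standard `zipfile` module. This is a minimal sketch under the documented rules; the function name, return shape, and message text are illustrative, and it doesn't check image dimensions or duplicate contents:

```python
import zipfile

IMAGE_EXTS = (".png", ".jpg", ".jpeg")

def check_zip_for_upload(path_or_file):
    """Flag entries in a .zip archive that violate the documented
    constraints (layout, name lengths, file types, ASCII names, size)."""
    problems = []
    with zipfile.ZipFile(path_or_file) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue
            parts = info.filename.split("/")
            # Expected layout: parent/<label_dir>/<image_file>
            if len(parts) != 3:
                problems.append(f"{info.filename}: not in parent/label/image layout")
                continue
            label, image = parts[1], parts[2]
            if len(label) > 20:
                problems.append(f"{label}: label name will be truncated to 20 characters")
            if len(image) > 100:
                problems.append(f"{image}: example name will be truncated to 100 characters")
            if not image.lower().endswith(IMAGE_EXTS):
                problems.append(f"{image}: unsupported file type, will be skipped")
            if not info.filename.isascii():
                problems.append(f"{info.filename}: non-ASCII name, training might fail")
            if info.file_size >= 1_000_000:  # "smaller than 1 MB", approximated as 10^6 bytes
                problems.append(f"{image}: 1 MB or larger, won't be loaded")
    return problems
```

`zipfile.ZipFile` accepts either a file path or a file-like object, so the same helper works for archives built in memory.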
## Response Body

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `available` | boolean | Specifies whether the dataset is ready to be trained. | 1.0 |
| `createdAt` | date | Date and time that the dataset was created. | 1.0 |
| `id` | long | ID of the dataset. | 1.0 |
| `labelSummary` | object | Contains the `labels` array that contains all the labels for the dataset. | 1.0 |
| `name` | string | Name of the dataset. | 1.0 |
| `object` | string | Object returned; in this case, `dataset`. | 1.0 |
| `statusMsg` | string | Status of the dataset creation and data upload. Valid values are: `FAILURE: <failure_reason>` (data upload has failed), `SUCCEEDED` (data upload is complete), `UPLOADING` (data upload is in progress). | 1.0 |
| `totalExamples` | int | Total number of examples in the dataset. | 1.0 |
| `totalLabels` | int | Total number of labels in the dataset. | 1.0 |
| `type` | string | Type of dataset data. Default is `image`. | 1.0 |
| `updatedAt` | date | Date and time that the dataset was last updated. | 1.0 |

## Label Response Body

| Name | Type | Description | Available Version |
|------|------|-------------|-------------------|
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |
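Because the upload on this page is asynchronous, callers typically poll Get a Dataset until `available` is `true`. A minimal sketch of that loop follows; `get_dataset` is a placeholder for whatever HTTP client you use to call the Get a Dataset endpoint, not an SDK function:

```python
import time

def wait_until_available(get_dataset, dataset_id, timeout_s=300, interval_s=5):
    """Poll the Get a Dataset endpoint until the upload finishes.

    `get_dataset` is any callable that returns the parsed JSON dataset
    object (a dict) for the given ID."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        dataset = get_dataset(dataset_id)
        if dataset.get("available"):
            return dataset  # statusMsg is SUCCEEDED when the upload completed
        if str(dataset.get("statusMsg", "")).startswith("FAILURE"):
            raise RuntimeError(f"Upload failed: {dataset['statusMsg']}")
        time.sleep(interval_s)
    raise TimeoutError(f"Dataset {dataset_id} not available after {timeout_s}s")
```

Remember that starting another upload while `available` is still `false` fails, so a loop like this also gates any follow-up Create Examples call.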

Definition

`PUT https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/upload`

Examples

```
curl -X PUT -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "data=@C:\Data\mountainvsbeach.zip" https://api.metamind.io/v1/vision/datasets/1000022/upload

curl -X PUT -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "path=http://metamind.io/images/mountainvsbeach.zip" https://api.metamind.io/v1/vision/datasets/1000022/upload
```

Result Format

```json
{
  "id": 1000022,
  "name": "mountainvsbeach",
  "createdAt": "2017-02-17T00:22:10.000+0000",
  "updatedAt": "2017-02-17T00:22:12.000+0000",
  "labelSummary": {
    "labels": [
      {
        "id": 1819,
        "datasetId": 1000022,
        "name": "Mountains",
        "numExamples": 50
      },
      {
        "id": 1820,
        "datasetId": 1000022,
        "name": "Beaches",
        "numExamples": 49
      }
    ]
  },
  "totalExamples": 99,
  "totalLabels": 2,
  "available": false,
  "statusMsg": "UPLOADING",
  "type": "image",
  "object": "dataset"
}
```


POST: Create an Example

Adds an example with the specified label to a dataset.

##Request Parameters##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | string | Location of the local image file to upload. | 1.0 |
| `labelId` | long | ID of the label to add the example to. | 1.0 |
| `name` | string | Name of the example. Maximum length is 180 characters. | 1.0 |

Keep the following points in mind when creating examples.
- You can add an example to a dataset using this API only if the dataset was created using the [Create a Dataset](doc:create-a-dataset) call. You can't use this call with a dataset that was created from a .zip file.
- After you add an example, you can return information about it, along with a URL to access the image. The URL expires in 30 minutes.
- Add an example to only one label.
- The maximum image file size is 1 MB.
- We recommend a minimum of 100 examples per label.
- The maximum number of examples that you can add by using this call is 3,000. If you have more than 3,000 examples, we recommend that you use the .zip upload API to create the dataset. See [Create a Dataset From a Zip File Asynchronously](doc:create-a-dataset-zip-async) and [Create a Dataset From a Zip File Synchronously](doc:create-a-dataset-zip-sync).
- Unicode characters aren't supported in example names. If names contain Unicode characters, you'll see unpredictable results when creating examples or training the dataset.
- The supported image file types are PNG, JPG, and JPEG.
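The limits above can be checked client-side before issuing the request, which avoids a round trip for an example that the service would reject. This illustrative helper (not part of the API) encodes the documented constraints:

```python
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg"}   # supported image file types
MAX_FILE_BYTES = 1 * 1024 * 1024                 # 1 MB maximum image file size
MAX_NAME_LENGTH = 180                            # maximum example name length

def validate_example(name, file_size_bytes, file_name):
    """Raise ValueError if an example violates the documented limits."""
    if len(name) > MAX_NAME_LENGTH:
        raise ValueError("name exceeds 180 characters")
    if not name.isascii():
        raise ValueError("Unicode characters in example names are unsupported")
    if file_size_bytes > MAX_FILE_BYTES:
        raise ValueError("image file exceeds 1 MB")
    ext = os.path.splitext(file_name)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError("only PNG, JPG, and JPEG files are supported")
```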
##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the example was created. | 1.0 |
| `id` | long | ID of the example. | 1.0 |
| `label` | object | Contains information about the label that the example is associated with. | 1.0 |
| `location` | string | URL of the image in the dataset. This is a temporary URL that expires in 30 minutes. This URL can be used to display images that were uploaded to a dataset in a UI. | 1.0 |
| `name` | string | Name of the example. | 1.0 |
| `object` | string | Object returned; in this case, `example`. | 1.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the example’s label belongs to. | 1.0 |
| `id` | long | ID of the example’s label. | 1.0 |
| `name` | string | Name of the example’s label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |

Definition

```
POST https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples
```

Examples

```shell
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=77880132.jpg" -F "labelId=614" -F "data=@C:\Mountains vs Beach\Beaches\77880132.jpg" https://api.metamind.io/v1/vision/datasets/57/examples
```

Result Format

```json
{
  "id": 43887,
  "name": "77880132.jpg",
  "location": "https://jBke4mtMuOjrCK3A04Q79O5TBySI2BC3zqi7...",
  "createdAt": "2016-09-15T23:18:13.000+0000",
  "label": {
    "id": 614,
    "datasetId": 57,
    "name": "beach",
    "numExamples": 50
  },
  "object": "example"
}
```


GET: Get an Example

Returns the example for the specified ID.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the example was created. | 1.0 |
| `id` | long | ID of the example. | 1.0 |
| `label` | object | Contains information about the label that the example is associated with. | 1.0 |
| `location` | string | URL of the image in the dataset. This is a temporary URL that expires in 30 minutes. This URL can be used to display images that were uploaded to a dataset in a UI. | 1.0 |
| `name` | string | Name of the example. | 1.0 |
| `object` | string | Object returned; in this case, `example`. | 1.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the example's label belongs to. | 1.0 |
| `id` | long | ID of the example's label. | 1.0 |
| `name` | string | Name of the example's label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |

Definition

```
GET https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>
```

Examples

```shell
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" "https://api.metamind.io/v1/vision/datasets/57/examples/43887"
```

Result Format

```json
{
  "id": 43887,
  "name": "77880132.jpg",
  "location": "https://jBke4mtMuOjrCK3A04Q79O5TBySI2BC3zqi7...",
  "createdAt": "2016-09-15T23:18:13.000+0000",
  "label": {
    "id": 614,
    "datasetId": 57,
    "name": "beach",
    "numExamples": 50
  },
  "object": "example"
}
```


GET: Get All Examples

Returns all the examples for the specified dataset.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | array | Array of `example` objects. | 1.0 |
| `object` | string | Object returned; in this case, `list`. | 1.0 |

##Example Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the example was created. | 1.0 |
| `id` | long | ID of the example. | 1.0 |
| `label` | object | Label that the example is associated with. | 1.0 |
| `location` | string | URL of the image in the dataset. This is a temporary URL that expires in 30 minutes. This URL can be used to display images that were uploaded to a dataset in a UI. | 1.0 |
| `name` | string | Name of the example. | 1.0 |
| `object` | string | Object returned; in this case, `example`. | 1.0 |

##Label Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `datasetId` | long | ID of the dataset that the label belongs to. | 1.0 |
| `id` | long | ID of the label. | 1.0 |
| `name` | string | Name of the label. | 1.0 |
| `numExamples` | int | Number of examples that have the label. | 1.0 |

##Query Parameters##

By default, this call returns 100 examples. If you want to page through the examples in a dataset, use the `offset` and `count` query parameters.

| Name | Description | Available Version |
| --- | --- | --- |
| `count` | Number of examples to return. Optional. | 1.0 |
| `offset` | Index of the example from which you want to start paging. Optional. | 1.0 |

Here's an example of these query parameters. If you omit the `count` parameter, the API returns 100 examples. If you omit the `offset` parameter, paging starts at 0. Quote the URL so that the shell doesn't treat the `&` as a command separator.

```shell
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" "https://api.metamind.io/v1/vision/datasets/57/examples?offset=100&count=50"
```

##How Paging Works##

To page through all the examples in a dataset:

1. Make the [Get a Dataset](doc:get-a-dataset) call to return the `totalExamples` value for the dataset.
2. Make the [Get All Examples](doc:get-all-examples) call and pass in the `offset` and `count` values until you reach the end of the examples.

For example, let's say you have a dataset and you want to display information about the examples in a UI, 50 at a time. The first call would have `offset=0` and `count=50`, the second call would have `offset=50` and `count=50`, and so on.
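The two-step paging procedure above can be sketched as a generator. The `fetch_page` callable here is a hypothetical wrapper around the Get All Examples call, not part of the API itself:

```python
def iter_examples(fetch_page, total_examples, page_size=50):
    """Yield every example in a dataset, page by page.

    `fetch_page(offset, count)` is a hypothetical wrapper around
    GET /v1/vision/datasets/<DATASET_ID>/examples?offset=<offset>&count=<count>
    that returns the parsed JSON body ({"object": "list", "data": [...]}).
    `total_examples` comes from the Get a Dataset call.
    """
    # Issue offsets 0, page_size, 2*page_size, ... until totalExamples is reached.
    for offset in range(0, total_examples, page_size):
        page = fetch_page(offset, page_size)
        for example in page["data"]:
            yield example
```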

Definition

```
GET https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples
```

Examples

```shell
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/57/examples
```

Result Format

```json
{
  "object": "list",
  "data": [
    {
      "id": 43888,
      "name": "659803277.jpg",
      "location": "https://K3A04Q79O5TBySIZSeMIj%2BC3zqi7rOmeK...",
      "createdAt": "2016-09-16T17:14:38.000+0000",
      "label": {
        "id": 618,
        "datasetId": 57,
        "name": "beach",
        "numExamples": 50
      },
      "object": "example"
    },
    {
      "id": 43889,
      "name": "661860605.jpg",
      "location": "https://jBke4mtMuOjrCK3A04Q79O5TBySI2BC3zqi7...",
      "createdAt": "2016-09-16T17:14:42.000+0000",
      "label": {
        "id": 618,
        "datasetId": 57,
        "name": "beach",
        "numExamples": 50
      },
      "object": "example"
    },
    {
      "id": 43890,
      "name": "660548647.jpg",
      "location": "https://HKzY79n47nd%2F0%2FCem6PJBkUoyxMWVssCX...",
      "createdAt": "2016-09-16T17:15:25.000+0000",
      "label": {
        "id": 619,
        "datasetId": 57,
        "name": "mountain",
        "numExamples": 49
      },
      "object": "example"
    },
    {
      "id": 43891,
      "name": "578339672.jpg",
      "location": "https://LRlXQeRyTVDiujSzHTabcJ2FGGnuGhAvedvu0D...",
      "createdAt": "2016-09-16T17:15:29.000+0000",
      "label": {
        "id": 619,
        "datasetId": 57,
        "name": "mountain",
        "numExamples": 49
      },
      "object": "example"
    }
  ]
}
```


DELETE: Delete an Example

Deletes the specified example.

This call doesn’t return a response body. Instead, it returns an HTTP status code 204.

Definition

```
DELETE https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/examples/<EXAMPLE_ID>
```

Examples

```shell
curl -X DELETE -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/108/examples/43555
```

Result Format

A successful call returns HTTP status code 204 with an empty body. If training has already started on the dataset, the call returns status code 400:

```json
{
  "message": "Deleting examples from datasets after training has started is not supported"
}
```


Train a Dataset (POST)

Trains a dataset and creates a model.

##Request Parameters##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `datasetId` | long | ID of the dataset to train. | 1.0 |
| `epochs` | int | Optional. Number of training iterations for the neural network. Valid values are 1–100. If not specified, the default is calculated based on the dataset size. The larger the number, the longer the training takes to complete. | 1.0 |
| `learningRate` | float | Optional. Specifies how much the gradient affects the optimization of the model at each time step. Use this parameter to tune your model. Valid values are between 0.0001 and 0.01. If not specified, the default is 0.0001. We recommend keeping this value between 0.0001 and 0.001. | 1.0 |
| `name` | string | Name of the model. Maximum length is 180 characters. | 1.0 |
| `trainParams` | object | Optional. JSON that contains parameters that specify how the model is created. Valid value: `{"trainSplitRatio": 0.n}`, the ratio of data used to train the dataset to the data used to test the model. The default split ratio is 0.9: 90% of the data is used to train the dataset and create the model, and 10% is used to test the model. If you pass in a split ratio of 0.6, then 60% of the data is used for training and 40% for testing. | 1.0 |

If you're unsure which values to set for the `epochs` and `learningRate` parameters, we recommend that you omit them and use the defaults.

##Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `createdAt` | date | Date and time that the model was created. | 1.0 |
| `datasetId` | long | ID of the dataset trained to create the model. | 1.0 |
| `datasetVersionId` | int | N/A | 1.0 |
| `epochs` | int | Number of epochs used during training. | 1.0 |
| `learningRate` | float | Learning rate used during training. | 1.0 |
| `modelId` | string | ID of the model. Contains letters and numbers. | 1.0 |
| `modelType` | string | Type of data from which the model was created. Default is `image`. | 1.0 |
| `name` | string | Name of the model. | 1.0 |
| `object` | string | Object returned; in this case, `training`. | 1.0 |
| `progress` | int | How far the training job has progressed. Values are between 0 and 1. | 1.0 |
| `queuePosition` | int | Where the training job is in the queue. This field appears in the response only if the status is `QUEUED`. | 1.0 |
| `status` | string | Status of the training job. Valid values: `QUEUED` (the training job is in the queue), `RUNNING` (the training job is running), `SUCCEEDED` (the training job succeeded, and the model was created), `FAILED` (the training job failed). | 1.0 |
| `trainParams` | object | Training parameters passed into the request. For example, if you sent in a split of 0.7, the response contains `"trainParams": {"trainSplitRatio": 0.7}`. | 1.0 |
| `trainStats` | object | Returns null when you train a dataset. Training statistics are returned when the status is `SUCCEEDED` or `FAILED`. | 1.0 |
| `updatedAt` | date | Date and time that the model was last updated. | 1.0 |

This cURL command sends in the `trainParams` request parameter. The double quotes and escaped double quotes around `trainSplitRatio` let it run on Windows; you might need to reformat it to run on another OS.

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Beach Mountain Model" -F "datasetId=57" -F "trainParams={\"trainSplitRatio\":0.7}" https://api.metamind.io/v1/vision/train
```

Definition

```
POST https://api.metamind.io/v1/vision/train
```

Examples

```
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "name=Beach Mountain Model" -F "datasetId=57" https://api.metamind.io/v1/vision/train
```

Result Format

A successful request returns status 200:

```json
{
  "datasetId": 57,
  "datasetVersionId": 0,
  "name": "Beach and Mountain Model",
  "status": "QUEUED",
  "progress": 0,
  "createdAt": "2016-09-16T18:03:21.000+0000",
  "updatedAt": "2016-09-16T18:03:21.000+0000",
  "learningRate": 0.001,
  "epochs": 3,
  "queuePosition": 1,
  "object": "training",
  "modelId": "7JXCXTRXTMNLJCEF2DR5CJ46QU",
  "trainParams": null,
  "trainStats": null,
  "modelType": "image"
}
```

A failed request returns status 400 with an error message, for example:

```json
{
  "message": "Train job not yet completed successfully"
}
```

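The documented limits (`name` up to 180 characters, `epochs` 1–100, `learningRate` 0.0001–0.01) can be checked client-side before the form fields are posted. This Python sketch is not part of the API; the helper name and validation choices are ours, and the returned dict simply mirrors the `-F` fields used in the cURL examples:

```python
import json

def validate_train_params(name, dataset_id, epochs=None, learning_rate=None,
                          train_split_ratio=None):
    """Check training fields against the documented limits before POSTing.

    Returns a dict of form fields matching the -F parameters of the
    /vision/train call; raises ValueError on an out-of-range value.
    """
    if not name or len(name) > 180:
        raise ValueError("name is required and limited to 180 characters")
    if epochs is not None and not 1 <= epochs <= 100:
        raise ValueError("epochs must be between 1 and 100")
    if learning_rate is not None and not 0.0001 <= learning_rate <= 0.01:
        raise ValueError("learningRate must be between 0.0001 and 0.01")
    if train_split_ratio is not None and not 0 < train_split_ratio < 1:
        raise ValueError("trainSplitRatio must be between 0 and 1")

    fields = {"name": name, "datasetId": str(dataset_id)}
    if epochs is not None:
        fields["epochs"] = str(epochs)
    if learning_rate is not None:
        fields["learningRate"] = str(learning_rate)
    if train_split_ratio is not None:
        # Serialized the way the -F "trainParams=..." form field expects.
        fields["trainParams"] = json.dumps({"trainSplitRatio": train_split_ratio})
    return fields
```

Omitting `epochs` and `learningRate`, as recommended above, leaves those fields out entirely so the service applies its defaults.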

Get Training Status (GET)

Returns the status of a training job. Use the `progress` field to determine how far the training has progressed. When training completes successfully, the status is `SUCCEEDED` and `progress` is 1.

##Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `createdAt` | date | Date and time that the model was created. | 1.0 |
| `datasetId` | long | ID of the dataset trained to create the model. | 1.0 |
| `datasetVersionId` | int | N/A | 1.0 |
| `epochs` | int | Number of epochs used during training. | 1.0 |
| `failureMsg` | string | Reason the dataset training failed. Returned only if the training status is `FAILED`. | 1.0 |
| `learningRate` | float | Learning rate used during training. | 1.0 |
| `modelId` | string | ID of the model. Contains letters and numbers. | 1.0 |
| `modelType` | string | Type of data from which the model was created. Default is `image`. | 1.0 |
| `name` | string | Name of the model. | 1.0 |
| `object` | string | Object returned; in this case, `training`. | 1.0 |
| `progress` | int | How far the training job has progressed. Values are between 0 and 1. | 1.0 |
| `queuePosition` | int | Where the training job is in the queue. This field appears in the response only if the status is `QUEUED`. | 1.0 |
| `status` | string | Status of the training job. Valid values: `QUEUED` (the training job is in the queue), `RUNNING` (the training job is running), `SUCCEEDED` (the training job succeeded, and you can use the model), `FAILED` (the training job failed). | 1.0 |
| `trainParams` | string | Training parameters passed into the request. | 1.0 |
| `trainStats` | object | Training statistics. | 1.0 |
| `updatedAt` | string | Date and time that the model was last updated. | 1.0 |

##TrainStats Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `datasetLoadTime` | string in HH:MM:SS:SSS format | Time it took to load the dataset to be trained. | 1.0 |
| `examples` | int | Total number of examples in the dataset from which the model was created. | 1.0 |
| `labels` | int | Total number of labels in the dataset from which the model was created. | 1.0 |
| `lastEpochDone` | int | Number of the last training iteration that was completed. | 1.0 |
| `modelSaveTime` | string in HH:MM:SS:SSS format | Time it took to save the model. | 1.0 |
| `testSplitSize` | int | Number of examples (from the dataset's total) used to test the model. `testSplitSize` + `trainSplitSize` is equal to `examples`. | 1.0 |
| `totalTime` | string in HH:MM:SS:SSS format | Total training time: `datasetLoadTime` + `trainingTime` + `modelSaveTime`. | 1.0 |
| `trainingTime` | string in HH:MM:SS:SSS format | Time it took to train the dataset to create the model. | 1.0 |
| `trainSplitSize` | int | Number of examples (from the dataset's total) used to train the model. `trainSplitSize` + `testSplitSize` is equal to `examples`. | 1.0 |

Definition

```
GET https://api.metamind.io/v1/vision/train/<MODEL_ID>
```

Examples

```
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/train/7JXCXTRXTMNLJCEF2DR5CJ46QU
```

Result Format

A successful request returns status 200:

```json
{
  "datasetId": 57,
  "datasetVersionId": 0,
  "name": "Beach and Mountain Model",
  "status": "SUCCEEDED",
  "progress": 1,
  "createdAt": "2016-09-16T18:03:21.000+0000",
  "updatedAt": "2016-09-16T18:03:21.000+0000",
  "learningRate": 0.001,
  "epochs": 3,
  "object": "training",
  "modelId": "7JXCXTRXTMNLJCEF2DR5CJ46QU",
  "trainParams": {"trainSplitRatio": 0.7},
  "trainStats": {
    "labels": 2,
    "examples": 99,
    "totalTime": "00:01:35:171",
    "trainingTime": "00:01:32:259",
    "lastEpochDone": 3,
    "modelSaveTime": "00:00:02:667",
    "testSplitSize": 33,
    "trainSplitSize": 66,
    "datasetLoadTime": "00:00:02:893"
  },
  "modelType": "image"
}
```

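Because training runs asynchronously, clients typically call this endpoint in a loop until `status` reaches `SUCCEEDED` or `FAILED`. A minimal polling sketch in Python; the helper is ours, and `get_status` stands in for whatever function performs the authenticated GET shown above:

```python
import time

def wait_for_training(get_status, poll_seconds=10, max_polls=60):
    """Poll until a training job reaches a terminal status.

    `get_status` is any callable returning a dict shaped like the
    /vision/train/<MODEL_ID> response; only `status` and `failureMsg`
    are inspected here.
    """
    for _ in range(max_polls):
        body = get_status()
        if body["status"] == "SUCCEEDED":
            return body
        if body["status"] == "FAILED":
            raise RuntimeError(body.get("failureMsg", "training failed"))
        time.sleep(poll_seconds)  # QUEUED or RUNNING: keep waiting
    raise TimeoutError("training did not finish within the polling budget")
```

The returned body is the final status response, so its `modelId` can be passed straight to the metrics and prediction calls.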

Get Model Metrics (GET)

Returns the metrics for a model, such as the f1 score, accuracy, and confusion matrix. Together, these metrics give you a picture of the model's accuracy and how well it's likely to perform. This call returns the metrics for the last epoch of the training that created the model. To see the metrics for each epoch, see [Get Model Learning Curve](doc:get-model-learning-curve).

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `createdAt` | date | Date and time that the model was created. | 1.0 |
| `id` | string | ID of the model. Contains letters and numbers. | 1.0 |
| `metricsData` | object | Model metrics values. | 1.0 |
| `object` | string | Object returned; in this case, `metrics`. | 1.0 |

##MetricsData Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `confusionMatrix` | array | Array of integers that contains the correct and incorrect classifications for each label in the dataset, based on testing done during the training process. | 1.0 |
| `f1` | array | Array of floats that contains the weighted average of precision and recall for each label in the dataset. The corresponding label for each value in this array is in the `labels` array; for example, the first f1 score in the `f1` array corresponds to the first label in the `labels` array. | 1.0 |
| `labels` | array | Array of strings that contains the dataset labels. These labels correspond to the values in the `f1` array and the `confusionMatrix` array. | 1.0 |
| `testAccuracy` | float | Accuracy of the test data. By default, 10% of your dataset is set aside and isn't used during training to create the model. This 10% is then sent to the model for prediction, and how often the model predicts correctly on it is reported as `testAccuracy`. | 1.0 |
| `trainingAccuracy` | float | Accuracy of the training data. By default, the 90% of your dataset that remains after the test set is set aside is used for training. This 90% is then sent to the model for prediction, and how often the model predicts correctly on it is reported as `trainingAccuracy`. | 1.0 |
| `trainingLoss` | float | Summary of the errors made in predictions using the training and validation data. The lower the value, the more accurate the model. | 1.0 |

Use the `labels` array and the `confusionMatrix` array to build the confusion matrix for a model. The labels in the array become the matrix rows and columns. Here's what the confusion matrix for the example results looks like.

|  | beach | mountain |
| --- | --- | --- |
| **beach** | 5 | 0 |
| **mountain** | 1 | 8 |
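As a sketch of how the `labels` and `confusionMatrix` arrays fit together, the snippet below prints the matrix with its row and column headers and derives per-label precision and recall from it. The `metrics` dict is hand-built from the example above, not a live API response:

```python
# Hand-built sample shaped like the MetricsData fields documented above
# (hypothetical values; not a live API response).
metrics = {
    "labels": ["beach", "mountain"],
    "confusionMatrix": [[5, 0], [1, 8]],  # rows = actual, columns = predicted
}

labels = metrics["labels"]
matrix = metrics["confusionMatrix"]

# Print the matrix with labels as row and column headers.
print("".ljust(10) + "".join(l.ljust(10) for l in labels))
for label, row in zip(labels, matrix):
    print(label.ljust(10) + "".join(str(n).ljust(10) for n in row))

# Derive per-label precision and recall from the matrix. The f1 value the
# API reports per label is the harmonic mean of these two numbers.
for i, label in enumerate(labels):
    true_pos = matrix[i][i]
    recall = true_pos / sum(matrix[i])                # over the actual row
    precision = true_pos / sum(r[i] for r in matrix)  # over the predicted column
    print(f"{label}: precision={precision:.2f} recall={recall:.2f}")
```

Here every actual beach image was classified correctly (recall 1.00), while one mountain image was misclassified as beach, which is why beach's precision drops below 1.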



Get Model Learning Curve (GET)

Returns the metrics for each epoch in a model. These metrics show you the f1 score, accuracy, confusion matrix, test accuracy, and so on for each training iteration performed to create the model.

##Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | array | Array of `learningcurve` objects. | 1.0 |
| `object` | string | Object returned; in this case, `list`. | 1.0 |

##LearningCurve Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `epoch` | int | Epoch to which the metrics correspond. | 1.0 |
| `epochResults` | object | Prediction results for the set of data used to test the model during training. | 1.0 |
| `metricsData` | object | Model metrics values. | 1.0 |
| `object` | string | Object returned; in this case, `learningcurve`. | 1.0 |

##MetricsData Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `confusionMatrix` | array | Array of integers that contains the correct and incorrect classifications for each label in the dataset, based on testing done during the training process. | 1.0 |
| `f1` | array | Array of floats that contains the weighted average of precision and recall for each label in the dataset. The corresponding label for each value in this array is in the `labels` array; for example, the first f1 score in the `f1` array corresponds to the first label in the `labels` array. | 1.0 |
| `labels` | array | Array of strings that contains the dataset labels. These labels correspond to the values in the `f1` array and the `confusionMatrix` array. | 1.0 |
| `testAccuracy` | float | Accuracy of the test data. By default, 10% of your dataset is set aside and isn't used during training to create the model. This 10% is then sent to the model for prediction, and how often the model predicts correctly on it is reported as `testAccuracy`. | 1.0 |
| `trainingAccuracy` | float | Accuracy of the training data. By default, the 90% of your dataset that remains after the test set is set aside is used for training. This 90% is then sent to the model for prediction, and how often the model predicts correctly on it is reported as `trainingAccuracy`. | 1.0 |
| `trainingLoss` | float | Summary of the errors made in predictions using the training and validation data. The lower the value, the more accurate the model. | 1.0 |

##EpochResults Response Body##

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `exampleName` | string | Example name, followed by a hyphen, and the expected label. For example, `"exampleName": "549525751.jpg-Mountains"`. | 1.0 |
| `expectedLabel` | string | Image label provided when the dataset (used to create the model) was created. | 1.0 |
| `predictedLabel` | string | Label that the model predicted for the example. | 1.0 |

You can use the learning curve data to see how a model performed in each training iteration. This information helps you tune your model and identify the optimal number of epochs to specify when you train a dataset.

When you train a dataset, you can specify the number of epochs used to create the model; if you don't specify any, the training call selects a number of epochs based on the dataset. In each epoch, machine learning happens behind the scenes to create the model, and what's learned in each epoch is passed on to the next.

When it comes to epochs, more isn't always better. For example, you could train a dataset and specify five epochs, and then see from the learning curve data that the most accurate results for that dataset occur in the third epoch. You can also use the learning curve data to see whether your model is overfit (it predicts accurately with training data but not with unseen data) or underfit (it doesn't predict accurately with training data or unseen data).

For information about correlating the confusion matrix values and the labels, see [Get Model Metrics](doc:get-model-metrics).

###Use the Epoch Results to Tune the Model###

Use the `epochResults` values to better understand the `testAccuracy` number the model returns for each epoch. For example, if `testAccuracy` is 1, every example in the test set was predicted correctly. But if `testAccuracy` is 0.5, you can look at `epochResults` to see which test examples the model predicted incorrectly.

##Query Parameters##

By default, this call returns 25 epochs. To page through the epochs for a model, use the `offset` and `count` query parameters.

| Name | Description | Available Version |
| --- | --- | --- |
| `count` | Number of epochs to return. Optional. | 1.0 |
| `offset` | Index of the epoch from which you want to start paging. Optional. | 1.0 |

Here's an example of these query parameters. If you omit the `count` parameter, the API returns 25 epochs. If you omit the `offset` parameter, paging starts at 0.
```
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" "https://api.metamind.io/v1/vision/models/7JXCXTRXTMNLJCEF2DR5CJ46QU/lc?offset=30&count=10"
```

Note that the URL is quoted so the shell doesn't treat the `&` between query parameters as a command separator. For example, to page through all of the model's epochs 10 at a time, the first call would use `offset=0` and `count=10`, the second `offset=10` and `count=10`, and so on.
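The two techniques above, paging through epochs with `offset`/`count` and then picking the epoch where test accuracy peaks, can be sketched as follows. Everything here is hypothetical: `fetch_page` stands in for an authenticated HTTP GET against the `/lc` endpoint, and `SAMPLE_EPOCHS` is invented learning-curve data, not output from a real model:

```python
# Hypothetical learning-curve data shaped like the LearningCurve objects
# documented above (invented values for illustration).
SAMPLE_EPOCHS = [
    {"epoch": 1, "metricsData": {"testAccuracy": 0.70, "trainingAccuracy": 0.88}},
    {"epoch": 2, "metricsData": {"testAccuracy": 0.80, "trainingAccuracy": 0.98}},
    {"epoch": 3, "metricsData": {"testAccuracy": 0.75, "trainingAccuracy": 0.99}},
]

def fetch_page(offset, count):
    """Stand-in for GET /vision/models/<MODEL_ID>/lc?offset=...&count=...
    In real code this would be an HTTP request with a Bearer token."""
    return SAMPLE_EPOCHS[offset:offset + count]

def all_epochs(page_size=10):
    """Page through every epoch using offset/count, as described above."""
    epochs, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            return epochs
        epochs.extend(page)
        offset += page_size

def best_epoch(epochs):
    """Epoch with the highest test accuracy. Training accuracy rising while
    test accuracy falls in later epochs is a sign of overfitting."""
    return max(epochs, key=lambda e: e["metricsData"]["testAccuracy"])

print(best_epoch(all_epochs())["epoch"])  # prints 2
```

In this invented data, test accuracy peaks at epoch 2 even though training accuracy keeps climbing through epoch 3, the overfitting pattern described above.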

Definition

```
GET https://api.metamind.io/v1/vision/models/<MODEL_ID>/lc
```

Examples

```
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/models/7JXCXTRXTMNLJCEF2DR5CJ46QU/lc
```

Result Format

```json
// Remaining epochResults entries and later epochs omitted for brevity
{
    "object": "list",
    "data": [
        {
            "epoch": 1,
            "metricsData": {
                "f1": [0.7499999999999999, 0.8333333333333333],
                "labels": ["Mountains", "Beaches"],
                "testAccuracy": 0.8,
                "trainingLoss": 0.2573,
                "confusionMatrix": [[3, 2], [0, 5]],
                "trainingAccuracy": 0.8809
            },
            "epochResults": [
                {
                    "exampleName": "601053842.jpg-Mountains",
                    "expectedLabel": "Mountains",
                    "predictedLabel": "Mountains"
                },
                {
                    "exampleName": "549525751.jpg-Mountains",
                    "expectedLabel": "Mountains",
                    "predictedLabel": "Beaches"
                }
            ],
            "object": "learningcurve"
        }
    ]
}
```

If training hasn't finished, the call returns a 400 status:

```json
{
  "message": "Train job not yet completed successfully"
}
```

##Response Body## [block:parameters] { "data": { "0-0": "`data`", "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-1": "array", "0-2": "Array of `learningcurve` objects.", "0-3": "1.0", "1-3": "1.0", "1-0": "`object`", "1-1": "string", "1-2": "Object returned; in this case, `list`." }, "cols": 4, "rows": 2 } [/block] ##LearningCurve Response Body## [block:parameters] { "data": { "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-0": "`epoch`", "0-1": "int", "0-2": "Epoch to which the metrics correspond.", "0-3": "1.0", "1-0": "`epochResults`", "1-1": "object", "1-2": "Prediction results for the set of data used to test the model during training.", "1-3": "1.0", "2-0": "`metricsData`", "2-1": "object", "2-2": "Model metrics values.", "2-3": "1.0", "3-0": "`object`", "3-1": "string", "3-2": "Object returned; in this case, `learningcurve`.", "3-3": "1.0" }, "cols": 4, "rows": 4 } [/block] ##MetricsData Response Body## [block:parameters] { "data": { "h-0": "Name", "h-1": "Type", "h-2": "Description", "h-3": "Available Version", "0-0": "`confusionMatrix`", "0-1": "array", "0-2": "Array of integers that contains the correct and incorrect classifications for each label in the dataset based on testing done during the training process.", "0-3": "1.0", "1-0": "`f1`", "1-1": "array", "1-2": "Array of floats that contains the weighted average of precision and recall for each label in the dataset. The corresponding label for each value in this array can be found in the `labels` array. For example, the first f1 score in the `f1` array corresponds to the first label in the `labels` array.", "1-3": "1.0", "3-0": "`testAccuracy`", "3-1": "float", "3-2": "Accuracy of the test data. From your initial dataset, by default, 10% of the data is set aside and isn't used during training to create the model. This 10% is then sent to the model for prediction. 
How often the correct prediction is made with this 10% is reported as `testAccuracy`.", "3-3": "1.0", "4-0": "`trainingAccuracy`", "4-1": "float", "4-2": "Accuracy of the training data. By default, 90% of the data from your dataset is left after the test accuracy set is set aside. This 90% is then sent to the model for prediction. How often the correct prediction is made with this 90% is reported as `trainingAccuracy`.", "4-3": "1.0", "5-0": "`trainingLoss`", "5-1": "float", "5-2": "Summary of the errors made in predictions using the training and validation data. The lower the number value, the more accurate the model.", "5-3": "1.0", "2-0": "`labels`", "2-1": "array", "2-2": "Array of strings that contains the dataset labels. These labels correspond to the values in the `f1` array and the `confusionMatrix` array.", "2-3": "1.0" }, "cols": 4, "rows": 6 } [/block]

##EpochResults Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `exampleName` | string | Example name, followed by a hyphen, and the expected label. For example, `"exampleName": "549525751.jpg-Mountains"`. | 1.0 |
| `expectedLabel` | string | Image label provided when the dataset (used to create the model) was created. | 1.0 |
| `predictedLabel` | string | Label that the model predicted for the example. | 1.0 |

You can use the learning curve data to see how a model performed in each training iteration. This information helps you tune your model and identify the optimal number of epochs to specify when you train a dataset. When you train a dataset, you can specify the number of epochs used to create the model; if you don't specify a number, the training call selects one based on the dataset data. In each epoch, machine learning happens behind the scenes to create the model, and information from each epoch is passed into the next.

When it comes to epochs, more isn't always better. For example, you could train a dataset and specify five epochs. Using the learning curve data for the resulting model, you might see that the most accurate results for that dataset are in the third epoch. You can also use the learning curve data to see whether your model is overfit (the model predicts accurately with training data but not unseen data) or underfit (the model doesn't predict accurately with training data or unseen data).

For information about correlating the confusion matrix values and the labels, see [Get Model Metrics](doc:get-model-metrics).

###Use the Epoch Results to Tune the Model###

Use the `epochResults` values to better understand the `testAccuracy` number the model returns for each epoch. For example, if `testAccuracy` is 1, then each example in the set of test examples was predicted correctly. But if `testAccuracy` is 0.5, you can look at the `epochResults` to see which test data the model predicted incorrectly.

##Query Parameters##

By default, this call returns 25 epochs. To page through the epochs for a model, use the `offset` and `count` query parameters.

| Name | Description | Available Version |
|---|---|---|
| `count` | Number of epochs to return. Optional. | 1.0 |
| `offset` | Index of the epoch from which you want to start paging. Optional. | 1.0 |

Here's an example of these query parameters. If you omit the `count` parameter, the API returns 25 epochs. If you omit the `offset` parameter, paging starts at 0.

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" "https://api.metamind.io/v1/vision/models/7JXCXTRXTMNLJCEF2DR5CJ46QU/lc?offset=30&count=10"
```

For example, let's say you want to page through all of the model epochs and show 10 at a time. The first call would have `offset=0` and `count=10`, the second call would have `offset=10` and `count=10`, and so on.
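The offset/count paging described above can be sketched in Python. `epoch_pages` is a hypothetical helper, not part of the Einstein Vision API; it only generates the query-parameter pairs for each call:

```python
def epoch_pages(total_epochs, page_size=10):
    """Yield (offset, count) query-parameter pairs that page through
    all epochs of a model, page_size epochs at a time."""
    for offset in range(0, total_epochs, page_size):
        # The last page may request more epochs than remain in the
        # model; the API simply returns whatever is left.
        yield (offset, page_size)

# Paging through 25 epochs, 10 at a time, takes three calls.
pages = list(epoch_pages(25, 10))
```

For 25 epochs and a page size of 10, this yields `(0, 10)`, `(10, 10)`, and `(20, 10)`, matching the sequence of calls described above.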
Get All Models

Returns all models for the specified dataset.

##Definition##

```
GET https://api.metamind.io/v1/vision/datasets/<DATASET_ID>/models
```

##Examples##

```curl
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/vision/datasets/57/models
```

##Result Format##

```json
{
  "object": "list",
  "data": [
    {
      "datasetId": 57,
      "datasetVersionId": 0,
      "name": "Beach Mountain Model - Test1",
      "status": "FAILED",
      "progress": 0,
      "createdAt": "2016-09-15T15:31:23.000+0000",
      "updatedAt": "2016-09-15T15:32:53.000+0000",
      "failureMsg": "To train a dataset and create a model, the dataset must contain at least 100 examples per label for test set",
      "object": "model",
      "modelId": "2KXJEOM3N562JBT4P7OX7VID2Q",
      "modelType": "image"
    },
    {
      "datasetId": 57,
      "datasetVersionId": 0,
      "name": "Beach Mountain Model - Test2",
      "status": "SUCCEEDED",
      "progress": 1,
      "createdAt": "2016-09-15T16:15:46.000+0000",
      "updatedAt": "2016-09-15T16:17:19.000+0000",
      "object": "model",
      "modelId": "YCQ4ZACEPJFGXZNRA6ERF3GL5E",
      "modelType": "image"
    }
  ]
}
```

##Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `data` | array | Array of `model` objects. If the dataset has no models, the array is empty. | 1.0 |
| `object` | string | Object returned; in this case, `list`. | 1.0 |

##Training Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `createdAt` | date | Date and time that the model was created. | 1.0 |
| `datasetId` | long | ID of the dataset trained to create the model. | 1.0 |
| `datasetVersionId` | int | N/A | 1.0 |
| `failureMsg` | string | Reason the dataset training failed. Returned only if the training status is `FAILED`. | 1.0 |
| `modelId` | string | ID of the model. Contains letters and numbers. | 1.0 |
| `modelType` | string | Type of data from which the model was created. Default is `image`. | 1.0 |
| `name` | string | Name of the model. | 1.0 |
| `object` | string | Object returned; in this case, `model`. | 1.0 |
| `progress` | int | How far the dataset training has progressed. Values are between 0 and 1. | 1.0 |
| `status` | string | Status of the model. Valid values are `QUEUED` (the training job is in the queue), `RUNNING` (the training job is running), `SUCCEEDED` (the training job succeeded, and you can use the model), and `FAILED` (the training job failed). | 1.0 |
| `updatedAt` | date | Date and time that the model was last updated. | 1.0 |
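To act on this response in code, you can branch on each model's `status`. Below is a minimal Python sketch, not an official client; the model IDs and failure message are copied from the example response above:

```python
import json

# Sample payload in the shape documented for Get All Models, trimmed to
# the fields this sketch uses.
raw = """{
  "object": "list",
  "data": [
    {"modelId": "2KXJEOM3N562JBT4P7OX7VID2Q", "status": "FAILED",
     "failureMsg": "To train a dataset and create a model, the dataset must contain at least 100 examples per label for test set"},
    {"modelId": "YCQ4ZACEPJFGXZNRA6ERF3GL5E", "status": "SUCCEEDED", "progress": 1}
  ]
}"""

models = json.loads(raw)["data"]

# Only models whose training SUCCEEDED can be used for predictions.
ready = [m["modelId"] for m in models if m["status"] == "SUCCEEDED"]

# failureMsg is returned only when status is FAILED.
failures = {m["modelId"]: m["failureMsg"]
            for m in models if m["status"] == "FAILED"}
```

Models still in `QUEUED` or `RUNNING` would simply land in neither collection, so a caller can poll this endpoint until each model resolves to `SUCCEEDED` or `FAILED`.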
Prediction with Image Base64 String

Returns a prediction for the specified image converted into a base64 string.

##Definition##

```
POST https://api.metamind.io/v1/vision/predict
```

##Examples##

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleBase64Content=/9j/4AAQSkZ..." -F "modelId=YCQ4ZACEPJFGXZNRA6ERF3GL5E" https://api.metamind.io/v1/vision/predict
```

##Result Format##

```json
{
  "probabilities": [
    {
      "label": "beach",
      "probability": 0.9602110385894775
    },
    {
      "label": "mountain",
      "probability": 0.039788953959941864
    }
  ],
  "object": "predictresponse"
}
```

##Request Parameters##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `modelId` | string | ID of the model that makes the prediction. | 1.0 |
| `sampleBase64Content` | string | The image contained in a base64 string. | 1.0 |
| `sampleId` | string | Optional. String that you can pass in to tag the prediction. Can be any value, and is returned in the response. | 1.0 |

Keep the following points in mind when sending an image in for prediction:
- The maximum image file size you can pass to this resource is 1 MB.
- The supported image file types are PNG, JPG, and JPEG.

##Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `message` | string | Error message. Returned only if the status is something other than successful (200). | 1.0 |
| `object` | string | Object returned; in this case, `predictresponse`. | 1.0 |
| `probabilities` | array | Probabilities for the prediction. | 1.0 |
| `sampleId` | string | Value passed in when the prediction call was made. Returned only if the parameter is provided. | 1.0 |
| `status` | string | Status of the prediction. Status of 200 means the prediction was successful. | 1.0 |

##Probabilities Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `label` | string | Probability label for the input. | 1.0 |
| `probability` | float | Probability value for the input. Values are between 0 and 1. | 1.0 |

##Rate Limit Headers##

Any time you make an API call to the `/predict` resource, your rate limit information is returned in the header. The rate limit headers specify your prediction usage for the current calendar month only.

```text
X-RateLimit-Limit 1000
X-RateLimit-Remaining 997
X-RateLimit-Reset 2017-04-01 19:31:42.0
```

| Header | Description | Example |
|---|---|---|
| `X-RateLimit-Limit` | Maximum number of prediction calls available for the current plan month. | 1000 |
| `X-RateLimit-Remaining` | Total number of prediction calls you have left for the current plan month. | 997 |
| `X-RateLimit-Reset` | Date on which your predictions are next provisioned. Always the first of the month. | 2017-04-01 22:07:40.0 |
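Building the `sampleBase64Content` value is plain base64 encoding of the image bytes. A minimal Python sketch, assuming the documented 1 MB limit applies to the raw image file; `to_sample_base64_content` is a hypothetical helper, not part of the API:

```python
import base64

MAX_IMAGE_BYTES = 1_000_000  # documented 1 MB limit for /predict

def to_sample_base64_content(image_bytes):
    """Encode raw image bytes into the base64 string expected by the
    sampleBase64Content parameter, enforcing the 1 MB image limit."""
    if len(image_bytes) > MAX_IMAGE_BYTES:
        raise ValueError("image exceeds the 1 MB limit for /predict")
    return base64.b64encode(image_bytes).decode("ascii")

# JPEG files start with the bytes FF D8 FF; base64 encodes them as "/9j",
# which is why the curl example above begins with "/9j/4AAQSkZ...".
content = to_sample_base64_content(b"\xff\xd8\xff\xe0" + b"\x00" * 16)
```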
Prediction with Image File

Returns a prediction for the specified local image file.

##Definition##

```
POST https://api.metamind.io/v1/vision/predict
```

##Examples##

```curl
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleId=Photo Prediction" -F "sampleContent=@/FileToPredict/our_trip_to_the_beach.jpg" -F "modelId=YCQ4ZACEPJFGXZNRA6ERF3GL5E" https://api.metamind.io/v1/vision/predict
```

##Result Format##

```json
{
  "probabilities": [
    {
      "label": "beach",
      "probability": 0.980938732624054
    },
    {
      "label": "mountain",
      "probability": 0.0190612580627203
    }
  ],
  "object": "predictresponse"
}
```

##Request Parameters##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `modelId` | string | ID of the model that makes the prediction. | 1.0 |
| `sampleContent` | string | File system location of the image file to upload. | 1.0 |
| `sampleId` | string | Optional. String that you can pass in to tag the prediction. | 1.0 |

Keep the following points in mind when sending an image in for prediction:
- The maximum image file size you can pass to this resource is 1 MB.
- The supported image file types are PNG, JPG, and JPEG.

##Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `message` | string | Error message. Returned only if the status is something other than successful (200). | 1.0 |
| `object` | string | Object returned; in this case, `predictresponse`. | 1.0 |
| `probabilities` | array | Probabilities for the prediction. | 1.0 |
| `sampleId` | string | Value passed in when the prediction call was made. Returned only if the parameter is provided. | 1.0 |
| `status` | string | Status of the prediction. Status of 200 means the prediction was successful. | 1.0 |

##Probabilities Response Body##

| Name | Type | Description | Available Version |
|---|---|---|---|
| `label` | string | Probability label for the input. | 1.0 |
| `probability` | float | Probability value for the input. Values are between 0 and 1. | 1.0 |

##Rate Limit Headers##

Any time you make an API call to the `/predict` resource, your rate limit information is returned in the header. The rate limit headers specify your prediction usage for the current calendar month only.

```text
X-RateLimit-Limit 1000
X-RateLimit-Remaining 997
X-RateLimit-Reset 2017-04-01 19:31:42.0
```

| Header | Description | Example |
|---|---|---|
| `X-RateLimit-Limit` | Maximum number of prediction calls available for the current plan month. | 1000 |
| `X-RateLimit-Remaining` | Total number of prediction calls you have left for the current plan month. | 997 |
| `X-RateLimit-Reset` | Date on which your predictions are next provisioned. Always the first of the month. | 2017-04-01 22:07:40.0 |
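To track your remaining quota, read the `X-RateLimit-*` values off each `/predict` response. A small Python sketch, assuming the headers arrive as a plain string-to-string mapping; `parse_rate_limit` is a hypothetical helper, not part of the API:

```python
def parse_rate_limit(headers):
    """Pull the prediction-quota values out of the X-RateLimit-*
    response headers returned by the /predict resource."""
    return {
        "limit": int(headers["X-RateLimit-Limit"]),
        "remaining": int(headers["X-RateLimit-Remaining"]),
        "reset": headers["X-RateLimit-Reset"],
    }

# Header values copied from the example above.
quota = parse_rate_limit({
    "X-RateLimit-Limit": "1000",
    "X-RateLimit-Remaining": "997",
    "X-RateLimit-Reset": "2017-04-01 19:31:42.0",
})
calls_used = quota["limit"] - quota["remaining"]  # predictions used this month
```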
# Prediction with Image URL

`POST /vision/predict`

Returns a prediction for the image file specified by its URL.

## Request Parameters

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `modelId` | string | ID of the model that makes the prediction. | 1.0 |
| `sampleId` | string | Optional. String that you can pass in to tag the prediction. | 1.0 |
| `sampleLocation` | string | URL of the image file. | 1.0 |

Keep the following points in mind when sending an image in for prediction:

- The maximum image file size you can pass to this resource is 1 MB.
- The supported image file types are PNG, JPG, and JPEG.

## Response Body

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `message` | string | Error message. Returned only if the status is something other than successful (200). | 1.0 |
| `object` | string | Object returned; in this case, `predictresponse`. | 1.0 |
| `probabilities` | array | Probabilities for the prediction. | 1.0 |
| `sampleId` | string | Value passed in when the prediction call was made. Returned only if the parameter is provided. | 1.0 |
| `status` | string | Status of the prediction. A status of 200 means the prediction was successful. | 1.0 |

## Probabilities Response Body

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `label` | string | Probability label for the input. | 1.0 |
| `probability` | float | Probability value for the input. Values are between 0 and 1. | 1.0 |

## Rate Limit Headers

Any time you make an API call to the `/predict` resource, your rate limit information is returned in the header. The rate limit headers specify your prediction usage for the current calendar month only.

```text
X-RateLimit-Limit 1000
X-RateLimit-Remaining 997
X-RateLimit-Reset 2017-04-01 19:31:42.0
```

| Header | Description | Example |
| --- | --- | --- |
| `X-RateLimit-Limit` | Maximum number of prediction calls available for the current plan month. | 1000 |
| `X-RateLimit-Remaining` | Total number of prediction calls you have left for the current plan month. | 997 |
| `X-RateLimit-Reset` | Date on which your predictions are next provisioned. Always the first of the month. | 2017-04-01 22:07:40.0 |

## Definition

`POST https://api.metamind.io/v1/vision/predict`

## Example

```shell
curl -X POST -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" -H "Content-Type: multipart/form-data" -F "sampleLocation=http://www.mysite.com/our_beach_vacation.jpg" -F "modelId=YCQ4ZACEPJFGXZNRA6ERF3GL5E" https://api.metamind.io/v1/vision/predict
```

## Result Format

```json
{
  "probabilities": [
    {
      "label": "beach",
      "probability": 0.9997345805168152
    },
    {
      "label": "mountain",
      "probability": 0.0002654256531968713
    }
  ],
  "object": "predictresponse"
}
```
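As a sketch of consuming this response in client code, the entry with the highest probability can be picked out of the `probabilities` array. The helper below is illustrative, not part of the API; it assumes the response has already been decoded from JSON into a dict.

```python
def top_prediction(response: dict) -> tuple:
    """Return the (label, probability) pair with the highest probability
    from a /vision/predict response body."""
    best = max(response["probabilities"], key=lambda p: p["probability"])
    return best["label"], best["probability"]

# Sample response body matching the Result Format above.
sample = {
    "probabilities": [
        {"label": "beach", "probability": 0.9997345805168152},
        {"label": "mountain", "probability": 0.0002654256531968713},
    ],
    "object": "predictresponse",
}
label, probability = top_prediction(sample)  # label == "beach"
```

Because the probabilities sum to roughly 1 across labels, the top entry is usually the only one worth surfacing to end users; the rest can be logged for debugging.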
# Generate an OAuth Token

`POST /oauth2/token`

Returns an OAuth token to access the API. You must pass a valid token in the header of each API call.

You must pass an assertion into this API call, so you first need to create a JWT payload and sign it with your private key to generate an assertion. To generate an assertion:

1. Create the JWT payload. The payload is JSON that contains:

   - `sub`—Your email address. This is the email address contained in the Salesforce org you used to sign up for an Einstein Platform account.
   - `aud`—The API endpoint URL for generating a token.
   - `exp`—The expiration time in Unix time. This value is the current Unix time in seconds plus the number of seconds you want the token to be valid.

   The JWT payload JSON looks like this.

   ```json
   {
     "sub": "<EMAIL_ADDRESS>",
     "aud": "https://api.metamind.io/v1/oauth2/token",
     "exp": "<EXPIRATION_SECONDS_IN_UNIX_TIME>"
   }
   ```

2. Sign the JWT payload with your RSA private key to generate an assertion. The private key is contained in the `predictive_services.pem` file you downloaded when you signed up for an account. The code to generate the assertion varies depending on your programming language.

3. Call the API and pass in the assertion. You pass all the necessary data in the `-d` parameter. Replace `<ASSERTION_STRING>` with the assertion you just generated.

   ```shell
   curl -H "Content-type: application/x-www-form-urlencoded" -X POST https://api.metamind.io/v1/oauth2/token -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION_STRING>"
   ```

## Response Body

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `access_token` | string | Token value for authorization. | 1.0 |
| `token_type` | string | Type of token returned. Always `Bearer`. | 1.0 |
| `expires_in` | integer | Number of seconds until the token expires, measured from the time it was generated. | 1.0 |

## Definition

`POST https://api.metamind.io/v1/oauth2/token`

## Example

```shell
curl -H "Content-type: application/x-www-form-urlencoded" -X POST https://api.metamind.io/v1/oauth2/token -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<ASSERTION_STRING>"
```

## Result Format

```json
{
  "access_token": "c3d95b4bf17108680b7495d069912127d7e3cbb9",
  "token_type": "Bearer",
  "expires_in": 9999902
}
```
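To make step 1 concrete, here is a minimal stdlib-only sketch that builds the base64url-encoded `header.payload` signing input for the assertion. It assumes the RS256 algorithm and deliberately stops short of the signature itself, which requires an RSA library (for example, PyJWT with the key from `predictive_services.pem`); the function name is this document's invention.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def jwt_signing_input(email: str, valid_for: int = 3600) -> str:
    """Build the header.payload portion of the assertion JWT.

    Signing this string with the RSA private key (RS256) and appending
    the base64url-encoded signature yields the full assertion."""
    header = {"alg": "RS256", "typ": "JWT"}
    payload = {
        "sub": email,
        "aud": "https://api.metamind.io/v1/oauth2/token",
        # Current Unix time in seconds plus the desired validity window.
        "exp": int(time.time()) + valid_for,
    }
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
```

In practice a JWT library handles both the encoding and the signing in one call; the sketch only shows how the claims from step 1 map onto the wire format.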
# Get API Usage

`GET /apiusage`

Returns prediction usage on a monthly basis for the current calendar month and future months. Each `apiusage` object in the response corresponds to a calendar month in your plan. For more information about plans, see [Rate Limits](doc:rate-limits).

## Response Body

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `data` | object | Array of `apiusage` objects. | 1.0 |
| `object` | string | Object returned; in this case, `list`. | 1.0 |

## Apiusage Response Body

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `endsAt` | date | Date and time that the plan calendar month ends. Always 12 AM on the first day of the following month. | 1.0 |
| `id` | long | Unique ID for the API usage plan month. | 1.0 |
| `licenseId` | string | Unique ID of the API plan. | 1.0 |
| `object` | string | Object returned; in this case, `apiusage`. | 1.0 |
| `organizationId` | long | Unique ID for the user making the API call. | 1.0 |
| `predictionsMax` | long | Total number of predictions for the calendar month. | 1.0 |
| `predictionsRemaining` | long | Number of predictions left for the calendar month. | 1.0 |
| `predictionsUsed` | long | Number of predictions used in the calendar month. A prediction is any call to the `/predict` resource. | 1.0 |
| `startsAt` | date | Date and time that the plan calendar month begins. Always the first of the month. | 1.0 |

## Plandata Response Body

| Name | Type | Description | Available Version |
| --- | --- | --- | --- |
| `amount` | string | Number of plans of the specified type. | 1.0 |
| `plan` | string | Type of plan based on the `source`. Valid values:<br>`HEROKU`: `STARTER`—1,000 predictions per calendar month; `BRONZE`—10,000; `SILVER`—250,000; `GOLD`—one million.<br>`SALESFORCE`: `STARTER`—1,000 predictions per calendar month; `SFDC_1M_EDITION`—one million. | 1.0 |
| `source` | string | Service that provisioned the plan. Valid values: `HEROKU`, `SALESFORCE`. | 1.0 |

Each `apiusage` object in the response contains plan information for a single calendar month for a single license. If you have a six-month paid plan and you make this call in the first month, the response contains six `apiusage` objects, one for each calendar month in the plan.

- If you're using the free tier, the response contains plan information only for the current month. You see plan information only after you make your first prediction. If you call the `/apiusage` resource before you make your first prediction call, the API returns an empty array.
- If you're using the paid tier, the response contains plan information for each month in your plan, starting with the current month.

The `planData` array contains an object for each plan type associated with the calendar month and the license. This code snippet shows the `planData` if the user has two Heroku GOLD plans.

```json
"planData": [
  {
    "plan": "GOLD",
    "amount": 2,
    "source": "HEROKU"
  }
]
```

## Definition

`GET https://api.metamind.io/v1/apiusage`

## Example

```shell
curl -X GET -H "Authorization: Bearer <TOKEN>" -H "Cache-Control: no-cache" https://api.metamind.io/v1/apiusage
```

## Result Format

```json
{
  "object": "list",
  "data": [
    {
      "id": "489",
      "organizationId": "108",
      "startsAt": "2017-03-01T00:00:00.000Z",
      "endsAt": "2017-04-01T00:00:00.000Z",
      "planData": [
        {
          "plan": "FREE",
          "amount": 1,
          "source": "HEROKU"
        }
      ],
      "licenseId": "kJCHtYDCSf",
      "object": "apiusage",
      "predictionsRemaining": 997,
      "predictionsUsed": 3,
      "predictionsMax": 1000
    }
  ]
}
```
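As an illustrative sketch (the helper name and tuple shape are this document's invention, not part of the API), the per-month figures can be pulled out of the `data` array like this, one tuple per `apiusage` object:

```python
def usage_summary(response: dict) -> list:
    """Summarize each apiusage object in a /apiusage response as a
    (month start, predictions used, remaining, max) tuple."""
    return [
        (m["startsAt"], m["predictionsUsed"], m["predictionsRemaining"], m["predictionsMax"])
        for m in response["data"]
    ]

# Sample /apiusage response trimmed to the fields the helper reads.
sample = {
    "object": "list",
    "data": [
        {
            "startsAt": "2017-03-01T00:00:00.000Z",
            "predictionsRemaining": 997,
            "predictionsUsed": 3,
            "predictionsMax": 1000,
        }
    ],
}
summary = usage_summary(sample)
```

On the free tier the list has at most one entry (the current month), so `summary[0]` is enough; on a paid plan each future month in the plan contributes its own tuple.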

API Error Codes and Messages

If an API call is unsuccessful, it returns an HTTP error code. If the error is known, you receive a message in the response body.

Known errors are returned in the response body in this format.

```json
{
  "message": "Invalid authentication scheme"
}
```

## All

| HTTP Code | HTTP Message | API Message | Resource | Possible Causes |
| --- | --- | --- | --- | --- |
| 401 | Unauthorized | Invalid access token | Any | The access token is expired. |
| 401 | Unauthorized | Invalid authentication scheme | Any | An `Authorization` header was provided, but the token isn't properly formatted. |
| 5XX | Internal server error or Service unavailable | None | Any | Our systems encountered and logged an unexpected error. Please contact us if you continue to see the error. |

## Datasets

Error codes that can occur when you access datasets, labels, or examples.

| HTTP Code | HTTP Message | API Message | Resource | Possible Causes |
| --- | --- | --- | --- | --- |
| 400 | Bad Request | None | Any dataset, label, or example resource | The request couldn’t be fulfilled because the HTTP request was malformed, the Content Type was incorrect, there were missing parameters, or a parameter was provided with an invalid value. |
| 400 | Bad Request | The 'name' parameter is required to create a dataset. | POST `/vision/datasets` | The `name` parameter was passed in, but no value was provided. |
| 400 | Bad Request | Label name `<NAME_OF_LABEL>` already present for dataset | POST `/vision/datasets/<DATASET_ID>/labels` | A label with the same name already exists in the dataset. Label names must be unique within a dataset. |
| 400 | Bad Request | Uploading a dataset requires either the 'data' field or the 'path' field. | POST `/vision/datasets/upload` | The path to the local .zip file or the URL to the .zip file in the cloud wasn’t specified. |
| 400 | Bad Request | The 'data' parameter cannot be duplicated, nor sent along with the 'path' parameter. | POST `/vision/datasets/upload` | Both the `path` and `data` parameters were passed to the call; only one of these parameters can be passed. |
| 400 | Bad Request | The dataset is not yet available for update, try again once the dataset is ready. | PUT `/vision/datasets/<DATASET_ID>/upload` | You’re adding examples to a dataset that’s still being created. You must wait for the dataset to become available before you can add examples to it. |
| 400 | Bad Request | Example max size supported is 1024000 | POST `/vision/datasets/<DATASET_ID>/examples` | The image file being added as an example exceeds the maximum file size of 1 MB. |
| 404 | Not Found | None | Any dataset, label, or example resource | The requested REST resource doesn’t exist, or you don’t have permission to access the resource. |
| 404 | Not Found | Unable to find dataset. | GET `/vision/datasets/<DATASET_ID>` | You don’t have access to the dataset, or the dataset was deleted. |
| 404 | Not Found | Unable to find dataset. | DELETE `/vision/datasets/<DATASET_ID>` | You don’t have access to the dataset, or the dataset was already deleted. |

## Training

Error codes that can occur when you train a dataset to create a model or access a model.

| HTTP Code | HTTP Message | API Message | Resource | Possible Causes |
| --- | --- | --- | --- | --- |
| 400 | Bad Request | "The 'name' parameter is required to train." or "A valid 'datasetId' parameter is required to create a example." | POST `/vision/train` | The `name` or `datasetId` parameter was passed in, but no parameter value was provided. |
| 400 | Bad Request | The 'name', and 'datasetId' parameters are required to train. | POST `/vision/train` | The `name` or `datasetId` parameter is missing. |
| 400 | Bad Request | Invalid id `<MODEL_ID>` | GET `/vision/train/<MODEL_ID>` | There’s no model with an ID that matches the `modelId` parameter. |
| 400 | Bad Request | Invalid id `<MODEL_ID>` | GET `/vision/train/<MODEL_ID>/lc` | There’s no model with an ID that matches the `modelId` parameter. |
| 400 | Bad Request | "The job has not terminated yet; its current status is RUNNING." or "The job has not terminated yet; its current status is QUEUED." | GET `/vision/models/<MODEL_ID>` | The model for which you’re getting metrics hasn’t completed training. |
| 404 | Not Found | None | GET `/vision/models/<MODEL_ID>` | The `modelId` parameter is missing. |
| 405 | Method Not Allowed | None | GET `/vision/train/<MODEL_ID>` | The `modelId` parameter is missing. |

## Prediction

Error codes that can occur when you make a prediction.

| HTTP Code | HTTP Message | API Message | Resource | Possible Causes |
| --- | --- | --- | --- | --- |
| 400 | Bad Request | None | POST `/vision/predict` | The prediction request couldn’t be fulfilled because the HTTP request was malformed, the Content Type was incorrect, there were missing parameters, or a parameter was provided with an invalid value. |
| 400 | Bad Request | Bad Request: Bad sampleLocation | POST `/vision/predict` | The URL passed in the `sampleLocation` parameter is invalid. The URL could be incorrect, contain the wrong file name, or the file may have been moved. |
| 400 | Bad Request | The modelId parameter is required. | POST `/vision/predict` | The `modelId` parameter is missing. |
| 400 | Bad Request | Bad Request: Missing sampleLocation, sampleBase64Content, and sampleContent | POST `/vision/predict` | The parameter that specifies the image to predict is missing. |
| 400 | Bad Request | File size limit exceeded | POST `/vision/predict` | The file you passed in for prediction exceeds the maximum file size limit of 1 MB. |
| 400 | Bad Request | Bad Request: Unsupported sample file format | POST `/vision/predict` | The file you passed in for prediction isn’t one of the supported file types. |
| 403 | Forbidden | Forbidden! | POST `/vision/predict` | The model specified by the `modelId` parameter doesn’t exist, or the `modelId` parameter was passed in but no value was provided. |
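A client can branch on the HTTP status code and the optional `message` field in the response body. Here is a minimal sketch in Python; the helper name and the retry hints are illustrative, not part of the API:

```python
import json

def describe_error(status_code: int, body: str) -> str:
    """Summarize an error response for logging.

    Known errors carry a JSON body like {"message": "..."}; unknown
    errors (for example, most 5XX responses) may carry no message.
    """
    try:
        message = json.loads(body).get("message", "None")
    except (ValueError, AttributeError):
        message = "None"
    # Hypothetical handling policy, not mandated by the service:
    if status_code == 401:
        hint = "refresh the access token and retry"
    elif 400 <= status_code < 500:
        hint = "fix the request before retrying"
    else:
        hint = "server-side error; retry later or contact support"
    return f"{status_code}: {message} ({hint})"

print(describe_error(401, '{"message": "Invalid authentication scheme"}'))
```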

Dataset and Model Best Practices


- A dataset should contain at least 1,000 images per label.

- Each dataset label should have about the same number of images. For example, avoid a situation where you have 1,000 images in one label and 400 in another within the same dataset.

- Each dataset label should have a wide variety of images. If you have a label that contains images of a certain object, include images:
  - In color
  - In black and white
  - Blurred
  - That contain the object with other objects it might typically be seen with
  - With text and without text (if applicable)

- A dataset with a wide variety of images makes the model more accurate. For example, if you have a dataset label called “buildings,” include images of many different styles of buildings: Asian, Medieval, Renaissance, Modern, and so on.

- In a binary dataset, include images in the negative label that look similar to images in the positive label. For example, if your positive label is oranges, be sure to have grapefruits, tangerines, lemons, and other citrus fruits in your negative label.

- As you test your model, add the false positives and false negatives to your training dataset to make the model more accurate.

- If your dataset changes, you must train it again and create a new model.
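The size and balance guidelines above can be checked programmatically before you upload a dataset. Here is a rough sketch; the function name and thresholds are illustrative, not limits imposed by the API:

```python
def check_dataset_balance(label_counts: dict,
                          min_per_label: int = 1000,
                          max_ratio: float = 2.0) -> list:
    """Return warnings for labels that are too small or imbalanced.

    label_counts maps each label name to its number of images.
    """
    warnings = []
    for label, count in label_counts.items():
        if count < min_per_label:
            warnings.append(
                f"label '{label}' has only {count} images; "
                f"aim for at least {min_per_label}")
    counts = list(label_counts.values())
    # Flag imbalance when the largest label dwarfs the smallest one.
    if counts and max(counts) > max_ratio * min(counts):
        warnings.append("labels are imbalanced; keep image counts "
                        "roughly equal across labels")
    return warnings

# The 1,000-vs-400 split from the guideline above triggers both checks.
print(check_dataset_balance({"oranges": 1000, "other-citrus": 400}))
```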

Ways to Create a Dataset

There are three different API calls you can use to create a dataset. The first two APIs create a dataset from a .zip file.

## [Create a Dataset and Upload Images Asynchronously From a Zip File](doc:create-a-dataset-zip-async)

This call creates a dataset, labels, and examples from a .zip file in a single asynchronous operation.

- This API call returns a response in which the `available` value is `false` and the `statusMsg` value is `UPLOADING`.
- After you make the call, use the [Get a Dataset](doc:get-a-dataset) call and check the `available` and `statusMsg` values.
- When `available` is `true` and `statusMsg` is `SUCCEEDED`, all the data has been uploaded from the .zip file, and you can train the dataset to create a model.

Use this call when you have a .zip file that’s 10 MB or larger. If your .zip file is more than 20 MB, for better performance, we recommend that you upload it to a cloud location that doesn’t require authentication and pass in the URL.

## [Create a Dataset and Upload Images Synchronously From a Zip File](doc:create-a-dataset-zip-sync)

This call creates a dataset, labels, and examples from a .zip file in a single synchronous operation.

- The response is returned only after the call completes.
- The `available` and `statusMsg` fields in the response indicate whether the call was successful. If `available` is `true` and `statusMsg` is `SUCCEEDED`, all the data has been uploaded from the .zip file, and you can train the dataset to create a model.

Use this call when you have a .zip file that’s less than 10 MB.

### How Datasets Are Created From a Zip File

When you create a dataset from a .zip file, the API uses the structure of the .zip file.

- The dataset name is the name of the .zip file minus the file extension.
- A label is created for each directory that contains images, and the label name is the same as the directory name.
- An example is created for each image, and the example name is the same as the file name.

The .zip file can be on a local drive or accessible from a cloud location that doesn’t require authentication.
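The naming rules above can be illustrated with a small helper that maps paths inside a .zip file to dataset, label, and example names. This is a sketch of the convention, not part of the API; the file names are made up:

```python
import os

def names_from_zip(zip_filename: str, image_paths: list) -> dict:
    """Derive names the way the API does from zip structure:
    dataset = zip file name minus the extension,
    label = name of the directory containing the image,
    example = image file name."""
    dataset = os.path.splitext(os.path.basename(zip_filename))[0]
    labels = {}
    for path in image_paths:
        label = os.path.basename(os.path.dirname(path))
        labels.setdefault(label, []).append(os.path.basename(path))
    return {"dataset": dataset, "labels": labels}

# A zip named beaches-vs-mountains.zip with two image directories:
print(names_from_zip("beaches-vs-mountains.zip",
                     ["beaches/beach1.jpg", "mountains/mtn1.jpg"]))
```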
You can add images to a dataset created from a .zip file only by using the [Create Examples From a Zip File](doc:create-examples-from-zip) call.

##[Create an Empty Dataset](doc:create-a-dataset)##

This call creates an empty dataset and, if you pass in the optional labels parameter, labels. To create examples in a dataset that you create using this call, you must use the [Create an Example](doc:create-an-example) call to add the images individually. After you create this type of dataset, you can add images only individually; you can’t add examples to the dataset from a .zip file. Therefore, we recommend that you create a dataset from a .zip file using either the asynchronous or synchronous call, depending on the amount of data.
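The asynchronous flow described above amounts to a polling loop: call the API, then repeatedly check `available` and `statusMsg`. A minimal sketch follows; `get_dataset` stands in for a hypothetical client function that performs the Get a Dataset call and returns the parsed JSON response, and here it is stubbed with canned responses so the loop can be exercised without a live account.

```python
import time

def wait_until_available(get_dataset, dataset_id, poll_seconds=5, max_polls=60):
    """Poll Get a Dataset until the async upload finishes.

    Returns the dataset response once `available` is true and `statusMsg`
    is SUCCEEDED. In this sketch any status other than UPLOADING or
    SUCCEEDED is treated as a failure; a real client may need to handle
    additional transient statuses.
    """
    for _ in range(max_polls):
        dataset = get_dataset(dataset_id)
        if dataset["available"] and dataset["statusMsg"] == "SUCCEEDED":
            return dataset
        if dataset["statusMsg"] != "UPLOADING":
            raise RuntimeError("upload failed: %s" % dataset["statusMsg"])
        time.sleep(poll_seconds)
    raise TimeoutError("dataset %s never became available" % dataset_id)

# Stubbed responses: first poll still uploading, second poll done.
responses = iter([
    {"available": False, "statusMsg": "UPLOADING"},
    {"available": True, "statusMsg": "SUCCEEDED"},
])
result = wait_until_available(lambda _id: next(responses), "1000022", poll_seconds=0)
```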

Code Samples and Learning Resources


## Code Samples ##

- [salesforce-einstein-vision-apex](https://github.com/muenzpraeger/salesforce-einstein-vision-apex)—GitHub repo that contains code for an Apex-based wrapper for the Einstein Vision API.

- [salesforce-einstein-vision-java](https://github.com/muenzpraeger/salesforce-einstein-vision-java)—GitHub repo that contains code for a Java-based wrapper for the Einstein Vision API.

- [salesforce-einstein-vision-swift](https://github.com/muenzpraeger/salesforce-einstein-vision-swift)—GitHub repo that contains code for a Swift-based wrapper for the Einstein Vision API.

## Trailhead ##

- [Quick Start: Einstein Vision](https://trailhead.salesforce.com/projects/predictive_vision_apex)—Step through using Apex to create a simple app that calls the Einstein Vision API to recognize and classify images.

- [AI Basics](https://trailhead.salesforce.com/en/module/ai_basics)—Learn what AI is and how it will transform CRM and the customer experience.

- [Salesforce Einstein Features](https://trailhead.salesforce.com/en/module/get_smart_einstein_feat)—Discover insights and predict outcomes with this powerful set of AI-enhanced features.

Support


##Check the Troubleshooting Resources##

- [API Error Codes and Messages](doc:api-error-codes-and-messages)

- [KB article Einstein Vision - Common Issues](https://help.salesforce.com/articleView?id=Einstein-Vision-Common-Issues&language=en_US&type=10)

- [KB article Einstein Vision Support FAQs](https://help.salesforce.com/articleView?id=Einstein-Vision-Support-FAQs&language=en_US&type=1)

- [Troubleshooting Page](page:troubleshooting)

##Ask a Question on the Developer Forums##

If you need help working with the Einstein Vision API, go to the [Einstein Platform developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS) on Salesforce Developers.

Rate Limits

You can call the Einstein Vision API as much as you need to create datasets, add examples, and train models. However, there are limits to the number of prediction calls you can make.

Einstein Vision provides two tiers of usage: free and paid. Each tier gives you a specific number of prediction calls. These limits apply only to predictions. A prediction is any POST call to `/vision/predict` to pass in an image and receive a prediction. A prediction call includes both predictions made from custom classifiers and predictions made from the pre-built classifiers.

##Free Tier##

When you sign up for an Einstein Platform account, you get 1,000 free predictions each calendar month. You get 1,000 predictions on the first of every month to be used by the last day of the month.

When you exceed the maximum number of predictions for the current calendar month, you receive an error message when you call the `/predict` resource.

##Paid Tier##

If you need more predictions than are available in the free tier, you can purchase them. Purchased predictions are also provisioned on a calendar month basis.

- **Heroku**—If you signed up for an account using the Heroku add-on, you can purchase more add-on credits. See the Einstein Vision [add-on page](https://elements.heroku.com/addons/einstein-vision) for more information about plans and pricing.

- **Salesforce**—If you signed up for an account using Salesforce, contact your account executive to purchase predictions for Einstein Vision. To add predictions to an existing account, you must provide the organization ID for your Einstein Platform account. To get your Einstein Platform organization ID, call the usage API. See [GET API Usage](doc:get-api-usage).

If you’re not sure what method you used to sign up or if you have any questions, post on the Einstein Platform [developer forum](https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2#!/feedtype=RECENT&dc=Predictive_Services&criteria=ALLQUESTIONS).

#Monitor Usage#

There are two ways you can get your prediction usage information from the Einstein Vision API.

- Response headers—headers returned by a call to the `/predict` resource that contain basic rate limit information. These headers give you a way to monitor your prediction usage as prediction calls are made.

- The `/apiusage` resource—contains detailed information about your limits and prediction usage.

##Rate Limit Headers##

Any time you make an API call to the `/predict` resource, rate limit information is returned in the header. The rate limit headers specify your prediction usage for the current calendar month only. See [Prediction with Image Base64 String](doc:prediction-with-image-base64-string), [Prediction with Image File](doc:prediction-with-image-file), or [Prediction with Image URL](doc:prediction-with-image-url).

##Prediction Usage API##

If you want to see detailed plan information, make an explicit call to the `/apiusage` resource. See [GET API Usage](doc:get-api-usage).
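To act on the rate limit headers, a client can read them after each `/predict` call and back off as the monthly quota runs low. The sketch below is illustrative only: the header names (`X-RateLimit-Limit`, `X-RateLimit-Remaining`) are placeholder assumptions, since this page doesn't enumerate them; check an actual `/predict` response for the names your account returns.

```python
def prediction_quota(headers):
    """Parse assumed rate-limit headers from a /predict response.

    Returns (limit, remaining, fraction_remaining). Header names are
    hypothetical placeholders, not confirmed by the documentation.
    """
    limit = int(headers.get("X-RateLimit-Limit", 0))
    remaining = int(headers.get("X-RateLimit-Remaining", 0))
    fraction = remaining / limit if limit else 0.0
    return limit, remaining, fraction

# Example: a free-tier account that has used 750 of its 1,000
# monthly predictions (header values are made up for illustration).
limit, remaining, fraction = prediction_quota(
    {"X-RateLimit-Limit": "1000", "X-RateLimit-Remaining": "250"}
)
if fraction < 0.1:
    print("warning: under 10% of this month's predictions remain")
```

For exact, authoritative numbers, prefer the `/apiusage` resource described above; the headers are best suited for cheap per-call monitoring.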