Hands-On AI Part 8: Crowdsourcing Word Selection for Image Search

Published: 09/18/2017   Last Updated: 09/18/2017

A Tutorial Series for Software Developers, Data Scientists, and Data Center Managers

In the previous article, we introduced the Amazon Mechanical Turk* (MTurk) crowdsourcing marketplace and described techniques for improving the effectiveness and efficiency of crowdsourcing pipelines. In this article, we demonstrate how to apply crowdsourcing in a real pipeline.

One of the components of the app we are building is emotion recognition from images. It takes an image as input and produces as output a class label for the most probable emotion depicted in the image. To develop such a component and train a machine learning model for emotion recognition, we need a large labeled set of images. The images must be collected and then annotated, which is a serious challenge at scale. Moreover, the data set should reflect the emotional associations between image content and emotions for a wide population of users, rather than the biased associations of the small group of researchers or developers working on the app. Crowdsourcing shines at these challenges because it scales human intelligence (HI) tasks across many workers.

For illustrative purposes, this article focuses only on the first step of the image data set preparation process: image selection and collection (data annotation can be configured in the same manner). Given an emotion, we need to find images matching it. We represent each emotion by a set of emotional keywords, then use these keywords as input to the Flickr* image search API to fetch relevant images. Crowdsourcing helps us select the keywords most representative of each emotion.

This article describes end-to-end how crowdsourcing can be used to select words for image search. It is especially useful if you have to prepare a data set for your AI project yourself or want to learn how to use crowdsourcing in your project. We also provide links to the emotion-word association data set that can be used to search for raw images on Flickr (covered in another article of this tutorial series).
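To make the image-fetching step concrete, below is a minimal sketch (in Python, using the requests library) of how crowdsourced keywords can drive the Flickr image search via the public flickr.photos.search method. The API key and the fetch_image_urls helper are illustrative placeholders, not code from the original project.

import requests

FLICKR_API = "https://api.flickr.com/services/rest/"
API_KEY = "YOUR_FLICKR_API_KEY"  # placeholder; obtain a key from Flickr

def fetch_image_urls(keyword, per_page=20):
    # Search Flickr for photos matching a keyword and build their URLs.
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "text": keyword,  # e.g., a crowdsourced emotion keyword
        "per_page": per_page,
        "format": "json",
        "nojsoncallback": 1,
    }
    photos = requests.get(FLICKR_API, params=params).json()["photos"]["photo"]
    # Construct static image URLs from the fields Flickr returns.
    return ["https://live.staticflickr.com/{server}/{id}_{secret}.jpg".format(**p)
            for p in photos]

print(fetch_image_urls("joy")[:5])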

Step-by-Step Process

  1. Create an MTurk account as a requester.

      a. Go to the Amazon Mechanical Turk home page.

      b. Click Sign in as a Requester in the top right corner.

      c. Enter your Amazon.com credentials.

      d. Add your credit card information.

      e. Refill your balance to pay for crowdsourcing.

  2. Select a HIT template for the survey.

  3. Create the HIT Group.

      a. In the Title field, name the HIT Group. Use naming heuristics to make your HITs more discoverable, for example, “!!!Word Associations” or “ZZZ Word Associations” (such names appear near the top or bottom when Workers sort HITs by title).

      b. Provide a short yet informative Description.

      c. Provide relevant Keywords. Workers like survey tasks, so if your task is similar to a survey, we recommend mentioning that in the tags/keywords and description.

      d. Specify a Reward per assignment of $0.01. Since our HIT contains only one question, this is an acceptable value.

      e. Type the Number of assignments per HIT. Since we want to accumulate words from many workers and count the most frequent words for each emotion, we will set this value to 25.

      f. Keep the default Time allotted per assignment of 1 hour.

      g. In the HIT expires in box, set the deadline to 1 day. All HITs in your HIT Group must be completed by this deadline; after it passes, the remaining HITs disappear automatically.

      h. Keep the default Auto-approve and pay Workers in selection of 8 hours. Since our HIT is simple (provide keywords associated with an emotion), we do not apply post-filtering or serious quality control to the results; all submitted results will be accepted.

      i. For Require that Workers be Masters to do your HITs, select No, so that all Workers can view and work on our HITs. Note that Master Workers cost slightly more. (If you prefer to script this setup, see the sketch after this list.)

  4. Design the user interface and layout, and then define your HIT Group.

      a. Provide the instructions.

      b. Remove all questions from the page template except the one with text input. To do this, edit the page in WYSIWYG mode or edit the HTML source; for the latter, click the corresponding element of the editor in the top-right corner. You can access the complete HTML code from Dropbox*.

      c. Make the most important information bold, such as the name of the emotion class. We create a separate HIT Group for each emotion.

      d. Click the Preview button.

  5. Publish a batch of HITs.

      a. Select the HIT Group for this project from the list of all your HIT Groups or Templates.

      b. Review the payment information (MTurk adds a fee on top of the payments to the workers as part of the service agreement).

  6. Wait for the results.

  7. Review the results and download the .csv file for processing.

      a. Click Results next to the HIT Group you’re interested in.

      b. Review the keywords submitted by the individual workers.

      c. Click Download CSV to save a local copy of the results.

  8. Normalize the submitted keywords.

      a. Perform a word count and sort the words alphabetically (no normalization is needed at this stage). You will see about 80–100 unique words per emotion class.

      b. Scan the list and merge semantically related words; for example, map “cats” to the canonical form “cat”. This can be done manually or with a stemmer or morphological analyzer. Since the data set in this project is small, the task was done manually.

      c. Sort the deduplicated and merged word list by frequency, for example, “dog: 10”, “cat: 5”, “car: 3”, and then take the top words based on some criterion. In this project, we kept words with an absolute frequency above 2; you could also take the top-k% or the top-k words with the highest ranks. Using this cut-off, we got about 40 words per emotion class. The normalized keywords for each emotion class are provided on Dropbox*. These keywords are used to fetch images from Flickr* (a code sketch of this normalization step follows this list).
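If you prefer to create the HIT Group from step 3 programmatically, the sketch below shows one way to do it with the boto3 MTurk client. The settings mirror the values chosen above; the question HTML is a simplified stand-in for the full template on Dropbox*, and the sandbox endpoint is used so that no real payments occur. Omitting QualificationRequirements corresponds to selecting No for the Masters requirement.

import boto3

# Sandbox endpoint for testing; remove endpoint_url to publish real HITs.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Simplified HTMLQuestion; the real layout lives in the Dropbox template.
QUESTION_XML = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <form action="https://www.mturk.com/mturk/externalSubmit" method="post">
      <p>Type words you associate with the emotion <b>joy</b>:</p>
      <input type="hidden" name="assignmentId" value="ASSIGNMENT_ID_NOT_AVAILABLE"/>
      <input type="text" name="words"/>
      <input type="submit"/>
    </form>
  ]]></HTMLContent>
  <FrameHeight>400</FrameHeight>
</HTMLQuestion>
"""

response = mturk.create_hit(
    Title="ZZZ Word Associations",           # discoverable title (step 3a)
    Description="Type words you associate with an emotion. Quick survey.",
    Keywords="survey, words, associations",  # survey-like keywords (step 3c)
    Reward="0.01",                           # $0.01 per assignment (step 3d)
    MaxAssignments=25,                       # 25 Workers per HIT (step 3e)
    AssignmentDurationInSeconds=3600,        # 1 hour allotted (step 3f)
    LifetimeInSeconds=86400,                 # HIT expires in 1 day (step 3g)
    AutoApprovalDelayInSeconds=8 * 3600,     # auto-approve after 8 hours (step 3h)
    Question=QUESTION_XML,
)
print("HIT ID:", response["HIT"]["HITId"])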
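Similarly, the normalization in step 8 can be scripted. The sketch below counts words from the downloaded results CSV and applies the absolute-frequency cut-off of 2 described above. The file name and the Answer.words column are assumptions: MTurk names result columns Answer.<field> after the input fields in your HIT layout, so adjust them to match your template.

import csv
from collections import Counter

def normalize(word):
    # Toy canonicalization: lowercase and strip a plural "s"
    # (maps "cats" to "cat"). A real pipeline might use a stemmer
    # or a morphological analyzer instead.
    word = word.strip().lower()
    return word[:-1] if word.endswith("s") and len(word) > 3 else word

counts = Counter()
with open("Batch_results.csv", newline="") as f:  # CSV downloaded in step 7
    for row in csv.DictReader(f):
        # Workers may submit several comma-separated words in one field.
        for raw in row["Answer.words"].split(","):
            if raw.strip():
                counts[normalize(raw)] += 1

# Keep words with absolute frequency above 2, highest frequency first.
keywords = [word for word, count in counts.most_common() if count > 2]
print(keywords)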

Conclusion

In this article, we presented an example of how to use MTurk in a real crowdsourced data annotation pipeline, motivated by the movie-making project (image classification subtask), and shared the materials and the final data set with the community. Although we eventually had to use an existing collection in this project, data annotation is an important topic in every AI project, so in the next article we will cover alternative data annotation techniques.


Prev: Augment AI with Human Intelligence Using Amazon Mechanical Turk* Next: Data Annotation Techniques

