CS 5306 / INFO 5306: Crowdsourcing and Human Computation
Lecture 14, 10/19/17
Haym Hirsh

Transcript of lecture slides (hirsh/5306/lecture14.pdf), 23 pages

Page 1:

CS 5306 / INFO 5306:
Crowdsourcing and Human Computation

Lecture 14, 10/19/17

Haym Hirsh

Page 2:

Extra Credit Opportunities

• 10/19/17 4:15pm, Gates G01, Tom Kalil

• 10/26/17 4:15pm, Gates G01, Tapan Parikh, Cornell Tech

• 11/16/17 4:15pm, Gates G01, James Grimmelmann, Cornell Law

• 12/1/17 12:15pm, Gates G01, Sudeep Bhatia, Penn Psychology

Page 3:

AI Successes

Page 4:

AI Successes

Machine Learning

Page 5:

AI Successes

Machine Learning

Human Annotated Data

Page 6:

Page 7:

AI Successes

Machine Learning

Human Annotated Data

Page 8:

Human-Annotated Data

• November 2005: Amazon Mechanical Turk

• May 2006: “AI gets a brain”, Barr J, Cabrera LF. Queue.

Page 9:

Human-Annotated Data

• November 2005: Amazon Mechanical Turk

• May 2006: “AI gets a brain”, Barr J, Cabrera LF. Queue.

Human data annotation cited as an example use

Page 10:

Human-Annotated Data

• November 2005: Amazon Mechanical Turk

• May 2006: “AI gets a brain”, Barr J, Cabrera LF. Queue.

• April 2008: “Crowdsourcing user studies with Mechanical Turk”, Kittur A, Chi EH, Suh B. In Proceedings of the SIGCHI conference on human factors in computing systems.

AMT for human subjects in HCI

Page 11:

Human-Annotated Data

• November 2005: Amazon Mechanical Turk

• May 2006: “AI gets a brain”, Barr J, Cabrera LF. Queue.

• April 2008: “Crowdsourcing user studies with Mechanical Turk”, Kittur A, Chi EH, Suh B. In Proceedings of the SIGCHI conference on human factors in computing systems.

• June 2008: “Utility data annotation with Amazon Mechanical Turk”, Sorokin A, Forsyth D. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW'08.

Image annotation

Page 12:

Human-Annotated Data

• November 2005: Amazon Mechanical Turk

• May 2006: “AI gets a brain”, Barr J, Cabrera LF. Queue.

• April 2008: “Crowdsourcing user studies with Mechanical Turk”, Kittur A, Chi EH, Suh B. In Proceedings of the SIGCHI conference on human factors in computing systems.

• June 2008: “Utility data annotation with Amazon Mechanical Turk”, Sorokin A, Forsyth D. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW'08.

• August 2008: “Get another label? Improving data quality and data mining using multiple, noisy labelers”, Sheng VS, Provost F, Ipeirotis PG. In Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining.

Be smart in using human annotators
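To make the “get another label?” question concrete, here is a minimal sketch (not from the paper; the function name and parameters are illustrative assumptions) of the standard calculation behind repeated labeling: the chance that a majority of k independent labels is correct when each labeler is right with probability p. It shows why extra labels help a lot for reasonably accurate labelers and much less for very noisy ones.

```python
from math import comb

def majority_correct(p, k):
    """Probability that the majority of k independent labels is right when
    each labeler is correct with probability p (use odd k to avoid ties).
    Illustrates when repeated labeling pays off."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

# One 0.7-accurate label stays at 0.70; a majority of five reaches about 0.84.
print(majority_correct(0.7, 1), round(majority_correct(0.7, 5), 2))
```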

Page 13:

Page 14:

Human-Annotated Data

• November 2005: Amazon Mechanical Turk

• May 2006: “AI gets a brain”, Barr J, Cabrera LF. Queue.

• April 2008: “Crowdsourcing user studies with Mechanical Turk”, Kittur A, Chi EH, Suh B. In Proceedings of the SIGCHI conference on human factors in computing systems.

• June 2008: “Utility data annotation with Amazon Mechanical Turk”, Sorokin A, Forsyth D. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW'08.

• August 2008: “Get another label? Improving data quality and data mining using multiple, noisy labelers”, Sheng VS, Provost F, Ipeirotis PG. In Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining.

Be smart in using human annotators

Page 15:

Human-Annotated Data

• November 2005: Amazon Mechanical Turk

• May 2006: “AI gets a brain”, Barr J, Cabrera LF. Queue.

• April 2008: “Crowdsourcing user studies with Mechanical Turk”, Kittur A, Chi EH, Suh B. In Proceedings of the SIGCHI conference on human factors in computing systems.

• June 2008: “Utility data annotation with Amazon Mechanical Turk”, Sorokin A, Forsyth D. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW'08.

• August 2008: “Get another label? Improving data quality and data mining using multiple, noisy labelers”, Sheng VS, Provost F, Ipeirotis PG. In Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining.

• October 2008: “Cheap and fast---but is it good?: evaluating non-expert annotations for natural language tasks”, Snow R, O'Connor B, Jurafsky D, Ng AY. In Proceedings of the conference on empirical methods in natural language processing.

Page 16:

Human-Annotated Data

• November 2005: Amazon Mechanical Turk

• May 2006: “AI gets a brain”, Barr J, Cabrera LF. Queue.

• April 2008: “Crowdsourcing user studies with Mechanical Turk”, Kittur A, Chi EH, Suh B. In Proceedings of the SIGCHI conference on human factors in computing systems.

• June 2008: “Utility data annotation with Amazon Mechanical Turk”, Sorokin A, Forsyth D. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW'08.

• August 2008: “Get another label? Improving data quality and data mining using multiple, noisy labelers”, Sheng VS, Provost F, Ipeirotis PG. In Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining.

• October 2008: “Cheap and fast---but is it good?: evaluating non-expert annotations for natural language tasks”, Snow R, O'Connor B, Jurafsky D, Ng AY. In Proceedings of the conference on empirical methods in natural language processing.

Page 17:

Human-Annotated Data

• November 2005: Amazon Mechanical Turk

• May 2006: “AI gets a brain”, Barr J, Cabrera LF. Queue.

• April 2008: “Crowdsourcing user studies with Mechanical Turk”, Kittur A, Chi EH, Suh B. In Proceedings of the SIGCHI conference on human factors in computing systems.

• June 2008: “Utility data annotation with Amazon Mechanical Turk”, Sorokin A, Forsyth D. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW'08.

• August 2008: “Get another label? Improving data quality and data mining using multiple, noisy labelers”, Sheng VS, Provost F, Ipeirotis PG. In Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining.

• October 2008: “Cheap and fast---but is it good?: evaluating non-expert annotations for natural language tasks”, Snow R, O'Connor B, Jurafsky D, Ng AY. In Proceedings of the conference on empirical methods in natural language processing.

Human language annotation

4 non-experts = 1 expert
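A minimal sketch of the kind of aggregation behind the “4 non-experts = 1 expert” finding: combine several cheap labels per item by majority vote. Snow et al. also explore weighted combinations; the function and data names here are illustrative assumptions, not the paper's code.

```python
from collections import Counter

def aggregate_by_majority(annotations):
    """Combine several non-expert labels per item into a single label
    by majority vote (ties broken arbitrarily by Counter ordering)."""
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

# Example: four cheap labels per item, in the spirit of Snow et al.
votes = {"sent-1": ["pos", "pos", "neg", "pos"],
         "sent-2": ["neg", "neg", "neg", "pos"]}
print(aggregate_by_majority(votes))   # {'sent-1': 'pos', 'sent-2': 'neg'}
```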

Page 18:

ImageNet

• June 2009: “Imagenet: A large-scale hierarchical image database”, Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. In Computer Vision and Pattern Recognition, 2009. CVPR 2009.

Page 19:

Cornell: OpenSurfaces

Page 20:

The Data Quality Problem

• You don’t know if the annotations are correct

• Requesters often blame workers (bots, spamming, …)

• Workers often blame requesters (poorly written task descriptions, …)

• Approaches: presume you want to find the “wrong” answers and workers

• Downvote the inaccurate people: “Maximum likelihood estimation of observer error-rates using the EM algorithm”, Dawid AP, Skene AM. Applied Statistics. 1979 Jan 1:20-8. (A sketch of this EM procedure appears after this list.)

• Provide the right incentives: “Mechanisms for making crowds truthful”, Jurca R, Faltings B. Journal of Artificial Intelligence Research. 2009 Jan 1;34(1):209.

• Majority vote

• Task design to avoid bad work: training, feedback
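As referenced in the list above, here is a compact sketch of the Dawid & Skene EM idea: alternately estimate each item's true-label distribution and each worker's confusion matrix, so workers who are usually wrong get downweighted. This is a simplified illustration initialized with majority vote; the variable names, smoothing constant, and fixed iteration count are assumptions, not the 1979 paper's notation.

```python
import numpy as np

def dawid_skene(labels, n_classes, n_iters=50):
    """Soft-label EM in the spirit of Dawid & Skene (1979).

    labels: iterable of (item_id, worker_id, label) integer triples;
            every item is assumed to have at least one label.
    Returns (T, pi): per-item class posteriors and per-worker confusion matrices.
    """
    labels = np.asarray(labels)
    n_items = labels[:, 0].max() + 1
    n_workers = labels[:, 1].max() + 1

    # Initialize item class estimates with (soft) majority vote.
    T = np.zeros((n_items, n_classes))
    for item, worker, label in labels:
        T[item, label] += 1.0
    T /= T.sum(axis=1, keepdims=True)

    for _ in range(n_iters):
        # M-step: class priors and worker confusion matrices pi[w, true, observed].
        priors = T.mean(axis=0)
        pi = np.full((n_workers, n_classes, n_classes), 1e-6)  # smoothing (assumption)
        for item, worker, label in labels:
            pi[worker, :, label] += T[item]
        pi /= pi.sum(axis=2, keepdims=True)

        # E-step: posterior over each item's true class given all its labels.
        logT = np.log(priors + 1e-12) + np.zeros((n_items, n_classes))
        for item, worker, label in labels:
            logT[item] += np.log(pi[worker, :, label])
        T = np.exp(logT - logT.max(axis=1, keepdims=True))
        T /= T.sum(axis=1, keepdims=True)

    return T, pi
```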

Page 21:

"Revolt: Collaborative Crowdsourcing for Labeling Machine Learning Datasets", Chang JC, Amershi S, and Kamar E. In

Proceedings of the 2017 CHI Conference on Human Factors.

Vote – Explain – Categorize

“Collaboration” takes place during the Categorize stage

Define new categories that propagate to other workers in real time
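A rough sketch of how the Vote-Explain-Categorize flow could be wired together; the function names and data shapes are assumptions for illustration, not the Revolt system's actual API. The point it shows: unanimous items finish at the Vote stage, and categories created during Categorize go into a shared list that later items (and workers) see.

```python
def revolt_flow(items, vote_fn, explain_fn, categorize_fn):
    """Hypothetical Vote-Explain-Categorize pipeline (all names are assumptions).

    vote_fn(item)                              -> list of worker labels
    explain_fn(item)                           -> worker explanations for conflicts
    categorize_fn(item, explanations, shared)  -> category chosen or newly created
    """
    shared_categories = []            # new categories propagate to later items
    results = []
    for item in items:
        votes = vote_fn(item)                        # Stage 1: Vote
        if len(set(votes)) == 1:
            results.append((item, votes[0]))         # unanimous: keep the label
            continue
        explanations = explain_fn(item)              # Stage 2: Explain
        category = categorize_fn(item, explanations, shared_categories)
        if category not in shared_categories:        # Stage 3: Categorize
            shared_categories.append(category)
        results.append((item, category))
    return results, shared_categories
```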

Page 22:

"CrowdVerge: Predicting If People Will Agree on the Answer to a Visual Question", Gurari, D. and Grauman, K., In Proceedings of the 2017 CHI

Conference on Human Factors in Computing Systems.

• Many projects get a fixed number of workers for each annotation, which wastes money

• (Compare to computing an AND function – you stop when you know the answer)

• (GalaxyZoo gets a variable number of labels based on annotator disagreement)

• (Compare to VoxPL – made the judgment on a statistical basis)

• Idea here: Learn to predict whether people will agree
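A small sketch of the variable-budget idea the bullets describe, assuming we already have a CrowdVerge-style prediction of whether people will agree (the names, thresholds, and budgets are illustrative assumptions): spend few labels when agreement is expected, more when it is not, and stop early once the leading answer can no longer be overtaken, just as evaluating an AND stops at the first False.

```python
from collections import Counter

def collect_labels(get_label, expect_agreement, min_labels=3, max_labels=9):
    """Sketch of a variable label budget for one item.

    get_label()      -> requests one more crowd label
    expect_agreement -> True when an agreement predictor expects consensus
    """
    budget = min_labels if expect_agreement else max_labels
    labels = []
    for _ in range(budget):
        labels.append(get_label())
        counts = Counter(labels).most_common()
        top, top_n = counts[0]
        second_n = counts[1][1] if len(counts) > 1 else 0
        if top_n > second_n + (budget - len(labels)):   # lead is unassailable: stop
            break
    return Counter(labels).most_common(1)[0][0], labels
```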

Page 23:

"CrowdVerge: Predicting If People Will Agree on the Answer to a Visual Question", Gurari, D. and Grauman, K., In Proceedings of the 2017 CHI

Conference on Human Factors in Computing Systems.

• Why do people disagree (not assuming malevolent workers!) on an annotation?

• Crowd worker skill

• Expertise

• “crowd worker may inadequately answer a seemingly simple question”

• Ambiguity in question and visual content

• Insufficient visual evidence

• Subjective questions

• Synonymous answers

• Varying levels of answer granularity