Schedule

  • Related Links

  • View the list of all Accepted Papers and the Full Paper Proceedings.
  • See the Program page for details of the Keynote Talks and the David B. Martin memorial.
  • See the Attend page for local and venue information.

Sunday, October 30, 2016

9:30am-6:00pm Doctoral Consortium (by invitation, at UT Austin's iSchool). See the detailed agenda.
3:00pm-6:00pm Tutorial: Crowdsourced Data Processing: Industry & Academic Perspectives (at UT Austin's iSchool)
6:00pm Sponsors Dinner (by invitation)

Monday, October 31, 2016

8:45am-9:00am Chairs' Welcome (see slides)
9:00am-10:00am Keynote Talk: Iyad Rahwan (MIT)
10:00am-10:30am Break
10:30am-11:50am FP1: Conversations & Vision
Session Chair: Jeff Nichols (Google)
Click Carving: Segmenting Objects in Video with Point Clicks
Suyog Jain and Kristen Grauman (University of Texas at Austin)
"Is there anything else I can help you with?": Challenges in Deploying an On-Demand Crowd-Powered Conversational Agent
Ting-Hao K. Huang (Carnegie Mellon University), Walter Lasecki (University of Michigan), Amos Azaria (Ariel University) and Jeffrey Bigham (Carnegie Mellon University)
CRQA: Crowd-powered Real-time Automated Question Answering System
Denis Savenkov and Eugene Agichtein (Emory University)
Investigating the Influence of Data Familiarity to Improve the Design of a Crowdsourcing Image Annotation System
Danna Gurari (University of Texas at Austin), Mehrnoosh Sameki (Boston University) and Margrit Betke (Boston University)
11:50am-1:20pm Lunch (see food and dining options)
1:20pm-2:20pm FP2: Learning to be Efficient
Session Chair: Gianluca Demartini (University of Sheffield)
Learning and Feature Selection under Budget Constraints in Crowdsourcing
Besmira Nushi, Adish Singla, Andreas Krause and Donald Kossmann (ETH Zurich)
Learning to scale payments in crowdsourcing with PropeRBoost
Goran Radanovic and Boi Faltings (Ecole Polytechnique Fédérale de Lausanne)
Efficient techniques for crowdsourced top-k lists
Best Paper Finalist
Luca de Alfaro, Vassilis Polychronopoulos and Neoklis Polyzotis (University of California, Santa Cruz)
2:20pm-2:40pm Encore Talk 1 (Gary Hsieh, University of Washington)
You Get Who You Pay for: The Impact of Incentives on Participation Bias
Gary Hsieh and Rafal Kocielnik. In Proceedings of CSCW 2016, the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 823-835, 2016. Best Paper Award.

2:40pm-3:10pm Break
3:10pm-4:30pm FP3: Education and Incentives
Session Chair: Jeff Bigham (Carnegie Mellon U.)
Practical Peer Prediction for Peer Assessment
Victor Shnayder and David Parkes (Harvard University)
Predicting Crowd Work Quality under Monetary Interventions
Ming Yin and Yiling Chen (Harvard University)
Crowdclass: Designing classification-based citizen science learning modules
Doris Jung-Lin Lee, Joanne Lo, Moonhyok Kim and Eric Paulos (University of California, Berkeley)
Crowdsourcing Accurate and Creative Word Problems and Hints
Yvonne Chen, Travis Mandel, Yun-En Liu and Zoran Popović (University of Washington)
4:30pm-5:10pm Industry Panel 1: When AI runs the show, what's left for crowd platforms?

Recent advances in artificial intelligence owe much of their success to the efforts of thousands of annotators on crowd marketplaces. The success of these platforms may be their own undoing. As more and more human computation tasks become solvable via artificial intelligence, existing systems will need to reinvent themselves to adapt to these new realities or die.

Real-world crowd labor platforms like Uber have already started replacing human drivers with artificial intelligence systems. In a world where sophisticated deep learning systems can handle most of the activities that originally supported crowd platforms, what work will be left for humans to do in labor marketplaces? What choices should platform designers and owners make to protect their relationships with participants and ensure continued liquidity? What responsibility do platforms have to the annotators who made their models possible?

5:10pm-5:50pm Industry Panel 2: When Are Workers Inputs and When Are They Users?

Crowd application designers face multiple choices in how to interact with members of the crowd – choices that affect both the quality of work for an application's customers and the quality of life for the individuals carrying out the work. Historically, most discussion has focused on pressuring platforms to make good choices.

As human computation systems engage increasingly large groups around the world in more sophisticated work than ever before, what choices should application designers make to balance the need for effective outcomes with the needs of participants on these platforms? Are these choices at odds with early-stage applications? What should change as working relationships become longer-term and involve increasingly complex work? What research questions remain for understanding best practices in application design from both perspectives?

5:50pm-6:00pm Announcement of Paper Awards

Presenter: Jeff Bigham (CMU), on behalf of HCOMP 2016's Senior Program Committee

6:00pm-8:00pm Opening Reception (held at venue)

Creative Halloween costumes are encouraged (but completely optional). Austin's Lucy in Disguise offers a great local selection to choose from.

Tuesday, November 1, 2016

9:00am-10:00am Keynote Talk: Ashish Goel (Stanford)
10:00am-10:30am Break
10:30am-11:50am FP4: Collecting Data You Want
Session Chair: Dan Weld (U. of Washington)
Modeling Task Complexity in Crowdsourcing
Jie Yang (Delft University of Technology), Judith Redi (Delft University of Technology), Gianluca Demartini (University of Sheffield) and Alessandro Bozzon (Delft University of Technology)
Studying the Effects of Task Notification Policies on Participation and Outcomes in On-the-go Crowdsourcing
Yongsung Kim, Emily Harburg, Shana Azria, Aaron Shaw, Elizabeth Gerber, Darren Gergle and Haoqi Zhang (Northwestern University)
Extending Workers' Attention Span Through Dummy Events
Best Paper Finalist
Avshalom Elmalech (Bar-Ilan University), David Sarne (Bar-Ilan University), Esther David (Ashkelon College) and Chen Hajaj (Bar-Ilan University)
Much Ado About Time: Exhaustive Annotation of Temporal Data
Gunnar Sigurdsson (Carnegie Mellon University), Olga Russakovsky (Carnegie Mellon University), Ivan Laptev (INRIA, France), Ali Farhadi (University of Washington) and Abhinav Gupta (Carnegie Mellon University)
11:50am-1:00pm Lunch (see food and dining options)
1:00pm-1:20pm Platinum Sponsor Talk (Matt Bencke, Co-Founder & CEO, Spare5)
Tips for Sourcing Better Training Data: Takeaways from contributing to the next generation of computer vision training data
  • Abstract. Spare5, a Training Data as a Service (TDaaS) provider, has partnered with COCO Consortium researchers to build the next generation of the COCO image recognition, segmentation, and captioning dataset for computer vision. We'll review the challenges in sourcing accurate annotations for the dataset and how they were overcome through iterations in task design. Mr. Bencke will share lessons learned from the project and explain how attendees can apply them to similar training data initiatives.

  • Speaker Bio. Matt Bencke is the co-founder and CEO of Spare5, the world’s leading provider of training data for computer vision and natural language, enabling machine learning and AI teams to train, validate, and test their algorithms and models.

1:20pm-2:40pm FP5: Task Design for Better Crowdsourcing
Session Chair: Praveen Paritosh (Google)
Crowdsourcing Relevance Assessments: The Unexpected Benefits of Limiting the Time to Judge
Eddy Maddalena (University of Udine), Marco Basaldella (University of Udine), Dario De Nart (University of Udine), Dante Degl'Innocenti (University of Udine), Stefano Mizzaro (University of Udine) and Gianluca Demartini (University of Sheffield)
MicroTalk: Using Argumentation to Improve Crowdsourcing Accuracy
Ryan Drapeau, Lydia B. Chilton, Jonathan Bragg and Daniel S. Weld (University of Washington)
Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments
Best Paper Award
Tyler McDonnell (University of Texas at Austin), Matthew Lease (University of Texas at Austin), Mucahid Kutlu (Qatar University) and Tamer Elsayed (Qatar University)
Interactive Consensus Agreement Games For Labeling Images
Paul Upchurch, Daniel Sedra, Andrew Mullen, Haym Hirsh and Kavita Bala (Cornell University)
2:40pm-3:00pm Encore Talk 2 (Shomir Wilson, University of Cincinnati)
Crowdsourcing annotations of websites' privacy policies: Can it really work?
Shomir Wilson, Florian Schaub, Rohan Ramanath, Norman Sadeh, Fei Liu, Noah Smith and Frederick Liu. In Proceedings of the 25th International World Wide Web Conference (WWW), pp. 133-143, 2016. Best Paper Finalist.
3:00pm-3:30pm Poster/Demo Madness 1
3:30pm-4:50pm Poster/Demo Session 1
Demonstrations

Creating Interactive Behaviors in Early Sketch by Recording and Remixing Crowd Demonstrations

  • Sang Won Lee, Yi Wei Yang, Shiyan Yan, Yujin Zhang, Isabelle Wong, Zhengxi Tan, Miles McGruder, Christopher Homan, Walter S. Lasecki

Integrating Citizen Science with Online Learning to Ask Better Questions

  • Vineet Pandey, Scott Klemmer, Amnon Amir, Justine Debelius, Embriette R. Hyde, Tomasz Kosciolek, Rob Knight

Ashwin: Plug-and-Play System for Machine-Human Image Annotation

  • Anand Sriraman, Mandar Kulkarni, Rahul Kumar, Kanika Kalra, Purushotam Radadia, Shirish Karande
Works-in-Progress Posters

A Markov Chain Based Ensemble Method for Crowdsourced Clustering

  • Sujoy Chatterjee, Enakshi Kundu, Anirban Mukhopadhyay

Consensus of Dependent Opinions

  • Sujoy Chatterjee, Anirban Mukhopadhyay, Malay Bhattacharyya

Pairwise, Magnitude, or Stars: What's the Best Way for Crowds to Rate?

  • Alessandro Checco, Gianluca Demartini

Toward Crowdsourced User Studies for Software Evaluation

  • Florian Daniel, Pavel Kucherbaev

Feedback and Timing in a Crowdsourcing Game

  • Gili Freedman, Sukdith Punjasthitkul, Max Seidman, Mary Flanagan

A GWAP Approach to Analyzing Informal Algorithm for Collaborative Group Problem Solving

  • Tatsuki Furukawa, Hajime Mizuyama, Tomomi Nonaka

Learning Phrasal Lexicons for Robotic Commands Using Crowdsourcing

  • Njie Hu, Jean Oh, Anatole Gershman

Redesigning Product X as an Inversion-Problem Game

  • Yu Kagoshima, Hajime Mizuyama, Tomomi Nonaka

Extremest Extraction: Push-Button Learning of Novel Events

  • Christopher Lin, Mausam, Daniel Weld

The Effect of Class Imbalance and Order on Crowdsourced Relevance Judgments

  • Rehab K. Qarout, Alessandro Checco, Gianluca Demartini

Feasibility of Post-Editing Speech Transcriptions with a Mismatched Crowd

  • Purushotam Radadia, Shirish Karande

Video Summarization using Causality Graphs

  • Shay Sheinfeld, Yotam Gingold, Ariel Shamir

Crowdsourcing Information Extraction for Biomedical Systematic Reviews

  • Yalin Sun, Pengxiang Cheng, Shengwei Wang, Hao Lyu, Matthew Lease, Iain Marshall, Byron C. Wallace

Encore Track Posters

Hybrid Human-Machine Information Systems: Challenges and Opportunities

  • Gianluca Demartini

Scheduling Human Intelligence Tasks in Multi-Tenant Crowd-Powered Systems

  • Djellel Eddine Difallah, Gianluca Demartini, Philippe Cudre-Mauroux

Learning to Incentivize: Eliciting Effort via Output Agreement

  • Yang Liu, Yiling Chen

On The Relation between Assessor’s Agreement and Accuracy in Gamified Relevance Assessment

  • Olga Megorskaya, Vladimir Kukushkin, Pavel Serdyukov

Doctoral Consortium Posters

Crowd-Powered Conversational Agents

  • Ting-Hao Huang

Crowdsourcing to Reconnect: Enabling Online Contributions and Social Interactions for Older Adults

  • Francisco Ibarra

Application of the Dual-Process Theory to Debias Forecasts in Prediction Markets

  • Simon Kloker

Understanding Online Self-Organization and Distributed Problem Solving in Disaster

  • Marina Kogan

Quantifying, Understanding, and Mitigating Crowd Work Bias

  • Jacob Thebault-Spieker

Incentive Engineering in Crowdsourcing Systems

  • Nhat Truong

5:00pm-6:00pm Remembrance: David B. Martin and his Ethnographic Studies of Crowd Workers
Presenter: Benjamin Hanrahan (Penn State University)
6:30pm Buses depart for Offsite Reception
7:00pm-10:00pm Offsite Reception at Maggie Mae's on Sixth Street

Wednesday, November 2, 2016

9:00am-10:00am Keynote Talk: Nathan Schneider (U. of Colorado, Boulder)
10:00am-10:30am Break
10:30am-11:50am FP6: Crowdsourcing Subjective Things
Session Chair: Jaime Teevan (Microsoft Research)
Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
Yan Shvartzshnaider (New York University), Schrasing Tong (Princeton University), Thomas Wies (New York University), Paula Kift (New York University), Helen Nissenbaum (New York University), Lakshminarayanan Subramanian (New York University) and Prateek Mittal (Princeton University)
Leveraging the Contributions of the Casual Majority to Identify Appealing Web Content
Tad Hogg (Institute for Molecular Manufacturing, USA) and Kristina Lerman (University of Southern California)
Evaluating Task-Dependent Taxonomies for Navigation
Yuyin Sun (University of Washington), Adish Singla (ETH Zurich), Tori Yan (University of Washington), Andreas Krause (ETH Zurich) and Dieter Fox (University of Washington)
Validating the Quality of Crowdsourced Psychometric Personality Test Items
Bao S. Loe (University of Cambridge), Francis Smart (Michigan State University), Lenka Fiřtová (University of Economics, Prague), Corinna Brauner (University of Münster), Laura Lüneborg (University of Bonn) and David Stillwell (University of Cambridge)
11:50am-1:10pm Lunch (see food and dining options)
1:10pm-1:30pm Encore Talk 3 (Joseph Chang, CMU)
Alloy: Clustering with Crowds and Computation
Joseph Chee Chang, Aniket Kittur, and Nathan Hahn. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 3180-3191. Best Paper Honorable Mention.
1:30pm-2:00pm Poster/Demo Madness 2
2:00pm-3:20pm Poster/Demo Session 2
Demonstrations

MmmTurkey: A Crowdsourcing Framework for Deploying Tasks and Recording Worker Behavior on Amazon Mechanical Turk

  • Brandon Dang, Miles Hutson, Matthew Lease

High Dimensional Human Guided Machine Learning

  • Eric Holloway, Robert Marks II

Reprowd: Crowdsourced Data Processing Made Reproducible

  • Ruochen Jiang, Jiannan Wang
Works-in-Progress Posters

Feature Based Task Recommendation in Crowdsourcing with Implicit Observations

  • Habibur Rahman, Lucas Joppa, Senjuti Basu Roy

Route Market: A Prediction-Market-Based Route Recommendation Service

  • Keisuke Beppu, Hajime Mizuyama, Tomomi Nonaka

Incentives for Truthful and Informative Post-Publication Peer Review

  • Luca de Alfaro, Marco Faella

Incentives for Truthful Evaluations

  • Luca de Alfaro, Marco Faella, Vassilis Polychronopoulos, and Michael Shavlovsky

Visual Questions: Predicting If a Crowd Will (Dis)Agree on the Answer

  • Danna Gurari, Kristen Grauman

Group Rotation Type Crowdsourcing

  • Katsumi Kumai (University of Tsukuba), Jianwei Zhang (Tsukuba University of Technology), Yuhki Shiraishi, Atsuyuki Morishima, Hiroyuki Kitagawa

Qualitative Framing of Financial Incentives – A Case of Emotion Annotation

  • Sephora Madjiheurem, Valentina Sintsova, Pearl Pu

Dynamic Allocation of Crowd Contributions for Sentiment Analysis during the 2016 U.S. Presidential Election

  • Mehrnoosh Sameki, Mattia Gentil, Kate K. Mays, Lei Guo, Margrit Betke

Exploring Required Work and Progress Feedback in Crowdsourced Mapping

  • Sofia Eleni Spatharioti, Becca Govoni, Jennifer Carrera, Sara Wylie, Seth Cooper

CoFE: A Collaborative Feature Engineering Framework for Data Science

  • Yoshihiko Suhara, Hideki Awashima, Hidekazu Oiwa, Alex Pentland

Incentive Engineering Framework for Crowdsourcing Systems

  • Nhat V.Q. Truong, Sebastian Stein, Long Tran-Thanh, Nicholas R. Jennings

Does Communication Help People Coordinate?

  • Yevgeniy Vorobeychik, Zlatko Joveski, Sixie Yu
Encore Track Posters

Predicting the Quality of User Contributions via LSTMs

  • Rakshit Agrawal, Luca De Alfaro

Toward a Learning Science for Complex Crowdsourcing Tasks

  • Shayan Doroudi, Ece Kamar, Emma Brunskill, Eric Horvitz

Crowd in C[loud]: Audience Participation Music with Online Dating Metaphor Using Cloud Service

  • Sang Won Lee, Antonio Deusany De Carvalho Junior, Georg Essl

Comparative Methods and Analysis for Creating High-Quality Question Sets from Crowdsourced Data

  • Sarah Luger, Jeff Bowles
Doctoral Consortium Posters

Understanding User Action and Behavior on Collaborative Platforms Using Machine Learning

  • Rakshit Agrawal

Self-Improving Crowdsourcing

  • Jonathan Bragg

How Crowdsensing and Crowdsourcing Change Collaborative Planning: Three explorations in transportation

  • Greg Griffin

Integrating Asynchronous Interaction into Real-Time Collaboration for Crowdsourced Creation

  • Sang Won Lee

Complex Systems and Society: What Are the Barriers to Automated Text Summarization of an Online Policy Deliberation?

  • Brian McInnis

Researching and Learning History via Crowdsourcing

  • Nai-Ching Wang
3:30pm-5:00pm FP7: Quality Models
Session Chair: Adam Kalai (Microsoft Research New England)
Probabilistic Modeling for Crowdsourcing Partially-Subjective Ratings
An T. Nguyen (University of Texas at Austin), Matthew Halpern (University of Texas at Austin), Byron C. Wallace (Northeastern University) and Matthew Lease (University of Texas at Austin)
State Detection using Adaptive Human Sensor Sampling
Ioannis Boutsis (Athens University of Economics and Business), Vana Kalogeraki (Athens University of Economics and Business) and Dimitrios Gunopulos (University of Athens)
Quality Estimation of Workers in Collaborative Crowdsourcing Using Group Testing
Prakhar Ojha and Partha Talukdar (Indian Institute of Science)
Understanding Crowdsourcing Workflow: Modeling and Optimizing Iterative and Parallel Processes
Shinsuke Goto, Toru Ishida and Donghui Lin (Kyoto University)
5:00pm-6:00pm Business Meeting (see slides)
6:00pm-? CrowdCamp Social Evening (CrowdCamp participants only)

Thursday, November 3, 2016