9:30am-6:00pm | Doctoral Consortium (by invitation, at UT Austin's iSchool). See the detailed agenda.
3:00pm-6:00pm | Tutorial: Crowdsourced Data Processing: Industry & Academic Perspectives (at UT Austin's iSchool)
6:00pm | Sponsors Dinner (by invitation) |
8:45am-9:00am | Chairs' Welcome (see slides)
9:00am-10:00am | Keynote Talk: Iyad Rahwan (MIT)
10:00am-10:30am | Break
10:30am-11:50am | FP1: Conversations & Vision (Session Chair: Jeff Nichols, Google)
Click Carving: Segmenting Objects in Video with Point Clicks | Suyog Jain and Kristen Grauman (University of Texas at Austin)
"Is there anything else I can help you with?": Challenges in Deploying an On-Demand Crowd-Powered Conversational Agent | Ting-Hao K. Huang (Carnegie Mellon University), Walter Lasecki (University of Michigan), Amos Azaria (Ariel University) and Jeffrey Bigham (Carnegie Mellon University)
CRQA: Crowd-powered Real-time Automated Question Answering System | Denis Savenkov and Eugene Agichtein (Emory University)
Investigating the Influence of Data Familiarity to Improve the Design of a Crowdsourcing Image Annotation System | Danna Gurari (University of Texas at Austin), Mehrnoosh Sameki (Boston University) and Margrit Betke (Boston University)
11:50am-1:20pm | Lunch (see food and dining options)
1:20pm-2:20pm | FP2: Learning to be Efficient (Session Chair: Gianluca Demartini, University of Sheffield)
Learning and Feature Selection under Budget Constraints in Crowdsourcing | Besmira Nushi, Adish Singla, Andreas Krause and Donald Kossmann (ETH Zurich)
Learning to scale payments in crowdsourcing with PropeRBoost | Goran Radanovic and Boi Faltings (Ecole Polytechnique Fédérale de Lausanne)
Efficient techniques for crowdsourced top-k lists | Luca de Alfaro, Vassilis Polychronopoulos and Neoklis Polyzotis (University of California, Santa Cruz)
2:20pm-2:40pm | Encore Talk 1 (Gary Hsieh, University of Washington)
You Get Who You Pay for: The Impact of Incentives on Participation Bias. Gary Hsieh and Rafal Kocielnik. In Proceedings of CSCW 2016: the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 823-835, 2016. Best Paper Award.
2:40pm-3:10pm | Break
3:10pm-4:30pm | FP3: Education and Incentives (Session Chair: Jeff Bigham, Carnegie Mellon U.)
Practical Peer Prediction for Peer Assessment | Victor Shnayder and David Parkes (Harvard University)
Predicting Crowd Work Quality under Monetary Interventions | Ming Yin and Yiling Chen (Harvard University)
Crowdclass: Designing classification-based citizen science learning modules | Doris Jung-Lin Lee, Joanne Lo, Moonhyok Kim and Eric Paulos (University of California, Berkeley)
Crowdsourcing Accurate and Creative Word Problems and Hints | Yvonne Chen, Travis Mandel, Yun-En Liu and Zoran Popović (University of Washington)
4:30pm-5:10pm | Industry Panel 1: When AI runs the show, what's left for crowd platforms?
Recent advances in artificial intelligence owe much of their success to the efforts of thousands of annotators on crowd marketplaces. The success of these platforms may be their own undoing. As more and more human computation tasks become solvable by artificial intelligence, existing systems will need to reinvent themselves to adapt to these new realities or die. Real-world crowd labor platforms like Uber have already started replacing human drivers with artificial intelligence systems. In a world where sophisticated deep learning systems can handle most of the activities that originally supported crowd platforms, what work will be left for humans to do in labor marketplaces? What choices should platform designers and owners make to protect their relationships with participants and ensure continued liquidity? And what responsibility do platforms have to the annotators who made their models possible?
5:10pm-5:50pm | Industry Panel 2: When Are Workers Inputs and When Are They Users?
Crowd application designers face multiple choices in how to interact with members of the crowd – choices that affect both the quality of work for an application's customers and the quality of life for the individuals carrying out the work. Historically, most discussion has sought to pressure platforms into making good choices here. As human computation systems engage increasingly large groups around the world in ever more sophisticated work, what choices should application designers make to balance the need for effective outcomes with the needs of participants on these platforms? Are these choices at odds with early-stage applications? What should change as a working relationship becomes more long-term and involves increasingly complex work? And what research questions remain for understanding best practices in application design from both perspectives?
5:50pm-6:00pm | Announcement of Paper Awards
Presenter: Jeff Bigham (CMU), on behalf of HCOMP 2016's Senior Program Committee
6:00pm-8:00pm | Opening Reception (held at venue)
Creative Halloween costumes are encouraged (but completely optional). Austin's Lucy in Disguise offers a great local selection to choose from.
9:00am-10:00am | Keynote Talk: Ashish Goel (Stanford)
10:00am-10:30am | Break
10:30am-11:50am | FP4: Collecting Data You Want (Session Chair: Dan Weld, U. of Washington)
Modeling Task Complexity in Crowdsourcing | Jie Yang (Delft University of Technology), Judith Redi (Delft University of Technology), Gianluca Demartini (University of Sheffield) and Alessandro Bozzon (Delft University of Technology)
Studying the Effects of Task Notification Policies on Participation and Outcomes in On-the-go Crowdsourcing | Yongsung Kim, Emily Harburg, Shana Azria, Aaron Shaw, Elizabeth Gerber, Darren Gergle and Haoqi Zhang (Northwestern University)
Extending Workers' Attention Span Through Dummy Events | Avshalom Elmalech (Bar-Ilan University), David Sarne (Bar-Ilan University), Esther David (Ashkelon College) and Chen Hajaj (Bar-Ilan University)
Much Ado About Time: Exhaustive Annotation of Temporal Data | Gunnar Sigurdsson (Carnegie Mellon University), Olga Russakovsky (Carnegie Mellon University), Ivan Laptev (INRIA, France), Ali Farhadi (University of Washington) and Abhinav Gupta (Carnegie Mellon University)
11:50am-1:00pm | Lunch (see food and dining options)
1:00pm-1:20pm | Platinum Sponsor Talk (Matt Bencke, Co-Founder & CEO, Spare5)
Tips for Sourcing Better Training Data: Takeaways from contributing to the next generation of computer vision training data
1:20pm-2:40pm | FP5: Task Design for Better Crowdsourcing (Session Chair: Praveen Paritosh, Google)
Crowdsourcing Relevance Assessments: The Unexpected Benefits of Limiting the Time to Judge | Eddy Maddalena (University of Udine), Marco Basaldella (University of Udine), Dario De Nart (University of Udine), Dante Degl'Innocenti (University of Udine), Stefano Mizzaro (University of Udine) and Gianluca Demartini (University of Sheffield)
MicroTalk: Using Argumentation to Improve Crowdsourcing Accuracy | Ryan Drapeau, Lydia B. Chilton, Jonathan Bragg and Daniel S. Weld (University of Washington)
Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments | Tyler McDonnell (University of Texas at Austin), Matthew Lease (University of Texas at Austin), Mucahid Kutlu (Qatar University) and Tamer Elsayed (Qatar University)
Interactive Consensus Agreement Games For Labeling Images | Paul Upchurch, Daniel Sedra, Andrew Mullen, Haym Hirsh and Kavita Bala (Cornell University)
2:40pm-3:00pm | Encore Talk 2 (Shomir Wilson, University of Cincinnati)
Crowdsourcing Annotations for Websites' Privacy Policies: Can It Really Work? Shomir Wilson, Florian Schaub, Rohan Ramanath, Norman Sadeh, Fei Liu, Noah Smith and Frederick Liu. In Proceedings of the 25th International World Wide Web (WWW) Conference, pp. 133-143, 2016. Best Paper Finalist.
3:00pm-3:30pm | Poster/Demo Madness 1
3:30pm-4:50pm | Poster/Demo Session 1
Demonstrations
Creating Interactive Behaviors in Early Sketch by Recording and Remixing Crowd Demonstrations
Integrating Citizen Science with Online Learning to Ask Better Questions
Ashwin: Plug-and-Play System for Machine-Human Image Annotation
Works-in-Progress Posters
A Markov Chain Based Ensemble Method for Crowdsourced Clustering
Consensus of Dependent Opinions
Pairwise, Magnitude, or Stars: What's the Best Way for Crowds to Rate?
Toward Crowdsourced User Studies for Software Evaluation
Feedback and Timing in a Crowdsourcing Game
A GWAP Approach to Analyzing Informal Algorithm for Collaborative Group Problem Solving
Learning Phrasal Lexicons for Robotic Commands Using Crowdsourcing
Redesigning Product X as an Inversion-Problem Game
Extremest Extraction: Push-Button Learning of Novel Events
The Effect of Class Imbalance and Order on Crowdsourced Relevance Judgments
Feasibility of Post-Editing Speech Transcriptions with a Mismatched Crowd
Video Summarization using Causality Graphs
Crowdsourcing Information Extraction for Biomedical Systematic Reviews
Encore Track Posters
Hybrid Human-Machine Information Systems: Challenges and Opportunities
Scheduling Human Intelligence Tasks in Multi-Tenant Crowd-Powered Systems
Learning to Incentivize: Eliciting Effort via Output Agreement
On The Relation between Assessor's Agreement and Accuracy in Gamified Relevance Assessment
Doctoral Consortium Posters
Crowd-Powered Conversational Agents
Crowdsourcing to Reconnect: Enabling Online Contributions and Social Interactions for Older Adults
Application of the Dual-Process Theory to Debias Forecasts in Prediction Markets
Understanding Online Self-Organization and Distributed Problem Solving in Disaster
Quantifying, Understanding, and Mitigating Crowd Work Bias
Incentive Engineering in Crowdsourcing Systems
5:00pm-6:00pm | Remembrance: David B. Martin and his Ethnographic Studies of Crowd Workers
Presenter: Benjamin Hanrahan (Penn State University)
6:30pm | Buses depart for Offsite Reception
7:00pm-10:00pm | Offsite Reception at Maggie Mae's on Sixth Street
9:00am-10:00am | Keynote Talk: Nathan Schneider (U. of Colorado, Boulder)
10:00am-10:30am | Break
10:30am-11:50am | FP6: Crowdsourcing Subjective Things (Session Chair: Jaime Teevan, Microsoft Research)
Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms | Yan Shvartzshnaider (New York University), Schrasing Tong (Princeton University), Thomas Wies (New York University), Paula Kift (New York University), Helen Nissenbaum (New York University), Lakshminarayanan Subramanian (New York University) and Prateek Mittal (Princeton University)
Leveraging the Contributions of the Casual Majority to Identify Appealing Web Content | Tad Hogg (Institute for Molecular Manufacturing, USA) and Kristina Lerman (University of Southern California)
Evaluating Task-Dependent Taxonomies for Navigation | Yuyin Sun (University of Washington), Adish Singla (ETH Zurich), Tori Yan (University of Washington), Andreas Krause (ETH Zurich) and Dieter Fox (University of Washington)
Validating the Quality of Crowdsourced Psychometric Personality Test Items | Bao S. Loe (University of Cambridge), Francis Smart (Michigan State University), Lenka Fiřtová (University of Economics, Prague), Corinna Brauner (University of Münster), Laura Lüneborg (University of Bonn) and David Stillwell (University of Cambridge)
11:50am-1:10pm | Lunch (see food and dining options)
1:10pm-1:30pm | Encore Talk 3 (Joseph Chang, CMU)
Alloy: Clustering with Crowds and Computation. Joseph Chee Chang, Aniket Kittur and Nathan Hahn. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 3180-3191, 2016. Best Paper Honorable Mention.
1:30pm-2:00pm | Poster/Demo Madness 2
2:00pm-3:20pm | Poster/Demo Session 2
Demonstrations
MmmTurkey: A Crowdsourcing Framework for Deploying Tasks and Recording Worker Behavior on Amazon Mechanical Turk
High Dimensional Human Guided Machine Learning
Reprowd: Crowdsourced Data Processing Made Reproducible
Works-in-Progress Posters
Feature Based Task Recommendation in Crowdsourcing with Implicit Observations
Route Market: A Prediction-Market-Based Route Recommendation Service
Incentives for Truthful and Informative Post-Publication Peer Review
Incentives for Truthful Evaluations
Visual Questions: Predicting If a Crowd Will (Dis)Agree on the Answer
Group Rotation Type Crowdsourcing
Qualitative Framing of Financial Incentives – A Case of Emotion Annotation
Dynamic Allocation of Crowd Contributions for Sentiment Analysis during the 2016 U.S. Presidential Election
Exploring Required Work and Progress Feedback in Crowdsourced Mapping
CoFE: A Collaborative Feature Engineering Framework for Data Science
Incentive Engineering Framework for Crowdsourcing Systems
Does Communication Help People Coordinate?
Encore Track Posters
Predicting the Quality of User Contributions via LSTMs
Toward a Learning Science for Complex Crowdsourcing Tasks
Crowd in C[loud]: Audience Participation Music with Online Dating Metaphor Using Cloud Service
Comparative Methods and Analysis for Creating High-Quality Question Sets from Crowdsourced Data
Doctoral Consortium Posters
Understanding User Action and Behavior on Collaborative Platforms Using Machine Learning
Self-Improving Crowdsourcing
How Crowdsensing and Crowdsourcing Change Collaborative Planning: Three explorations in transportation
Integrating Asynchronous Interaction into Real-Time Collaboration for Crowdsourced Creation
Complex Systems and Society: What Are the Barriers to Automated Text Summarization of an Online Policy Deliberation?
Researching and Learning History via Crowdsourcing
3:30pm-5:00pm | FP7: Quality Models (Session Chair: Adam Kalai, Microsoft Research New England)
Probabilistic Modeling for Crowdsourcing Partially-Subjective Ratings | An T. Nguyen (University of Texas at Austin), Matthew Halpern (University of Texas at Austin), Byron C. Wallace (Northeastern University) and Matthew Lease (University of Texas at Austin)
State Detection using Adaptive Human Sensor Sampling | Ioannis Boutsis (Athens University of Economics and Business), Vana Kalogeraki (Athens University of Economics and Business) and Dimitrios Gunopulos (University of Athens)
Quality Estimation of Workers in Collaborative Crowdsourcing Using Group Testing | Prakhar Ojha and Partha Talukdar (Indian Institute of Science)
Understanding Crowdsourcing Workflow: Modeling and Optimizing Iterative and Parallel Processes | Shinsuke Goto, Toru Ishida and Donghui Lin (Kyoto University)
5:00pm-6:00pm | Business Meeting (see slides)
6:00pm-? | CrowdCamp Social Evening (CrowdCamp participants only) |