Accepted Papers

  • Tracks

Accepted papers are grouped by track, with a link to each track group provided below.

  • Demonstrations

Ashwin: Plug-and-Play System for Machine-Human Image Annotation
  • Anand Sriraman, Mandar Kulkarni, Rahul Kumar, Kanika Kalra, Purushotam Radadia and Shirish Karande (TCS Research, Tata Consultancy Services)
Creating Interactive Behaviors in Early Sketch by Recording and Remixing Crowd Demonstrations
  • Sang Won Lee (University of Michigan), Yi Wei Yang (University of Michigan), Shiyan Yan (University of Michigan), Yujin Zhang (University of Michigan), Isabelle Wong (University of Michigan), Zhengxi Tan (University of Michigan), Miles McGruder (University of Michigan), Christopher Homan (Rochester Institute of Technology) and Walter S. Lasecki (University of Michigan)
High Dimensional Human Guided Machine Learning
  • Eric Holloway and Robert Marks II (Dept. Elec. and Comp. Eng., Baylor University)
Integrating citizen science with online learning to ask better questions
  • Vineet Pandey (Design Lab, UC San Diego), Scott Klemmer (Design Lab, UC San Diego), Amnon Amir (Department of Pediatrics, UC San Diego), Justine Debelius (Department of Pediatrics, UC San Diego), Embriette R. Hyde (Department of Pediatrics, UC San Diego), Tomasz Kosciolek (Department of Pediatrics, UC San Diego) and Rob Knight (Department of Pediatrics, UC San Diego)
MmmTurkey: A Crowdsourcing Framework for Deploying Tasks and Recording Worker Behavior on Amazon Mechanical Turk
  • Brandon Dang, Miles Hutson and Matt Lease (University of Texas, Austin)
Reprowd: Crowdsourced Data Processing Made Reproducible
  • Ruochen Jiang and Jiannan Wang (Simon Fraser University)
  • Doctoral Consortium

Accepted Doctoral Consortium participants will present posters at the main conference as well as participate in the DC itself.

Application of the Dual-Process Theory to debias Forecasts in Prediction Markets
  • Simon Kloker (KIT)
Complex Systems and Society: What are the barriers to automated text summarization of an online policy deliberation?
  • Brian McInnis (Cornell University)
Crowd-Powered Conversational Agents
  • Ting-Hao Huang (Carnegie Mellon University)
Crowdsourcing to reconnect: Enabling online contributions and social interactions for older adults
  • Francisco Ibarra (University of Trento)
How Crowdsensing and Crowdsourcing Change Collaborative Planning: three explorations in transportation
  • Greg Griffin (The University of Texas at Austin)
Incentive Engineering in Crowdsourcing Systems
  • Nhat Truong (University of Southampton)
Integrating Asynchronous Interaction into Real-time Collaboration for Crowdsourced Creation
  • Sang Won Lee (University of Michigan)
Quantifying, Understanding, and Mitigating Crowd Work Bias
  • Jacob Thebault-Spieker (GroupLens Research)
Researching and Learning History via Crowdsourcing
  • Nai-Ching Wang (Virginia Tech)
Self-improving Crowdsourcing
  • Jonathan Bragg (University of Washington)
Understanding Online Self-organization and Distributed Problem Solving in Disaster
  • Marina Kogan (University of Colorado Boulder)
Understanding User Action and Behavior on Collaborative Platforms using Machine Learning
  • Rakshit Agrawal (University of California, Santa Cruz)
  • Encore Papers

Invited Papers
You Get Who You Pay for: The Impact of Incentives on Participation Bias
  • Gary Hsieh and Rafal Kocielnik (University of Washington)
  • Proceedings of CSCW 2016: the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 823-835. ACM, 2016.
  • CSCW 2016 Best Paper Award
Crowdsourcing Annotations for Websites' Privacy Policies: Can It Really Work?
  • Shomir Wilson (Carnegie Mellon University), Florian Schaub (Carnegie Mellon University), Rohan Ramanath (Carnegie Mellon University), Norman Sadeh (Carnegie Mellon University), Fei Liu (University of Central Florida), Noah Smith (University of Washington), and Frederick Liu (Carnegie Mellon University)
  • Proceedings of the 25th International World Wide Web (WWW) Conference, pp. 133-143, 2016.
  • WWW 2016 Best Paper Finalist
Alloy: Clustering with Crowds and Computation
  • Joseph Chee Chang, Aniket Kittur, and Nathan Hahn (Carnegie Mellon University)
  • Proceedings of CHI 2016: the 34th ACM Conference on Human Factors in Computing Systems, pp. 3180-3191. ACM, 2016.
  • CHI 2016 Best Paper Honorable Mention

Accepted Papers
Comparative Methods and Analysis for Creating High-Quality Question Sets from Crowdsourced Data
  • Sarah KK Luger and Jeff Bowles (University of Edinburgh)
  • Proceedings of the Twenty-Ninth International Florida Artificial Intelligence Research Society (FLAIRS) Conference; Special Track: Applications of Artificial Intelligence in Business and Industry, pp. 185-190. 2016.
Crowd in C[loud]: Audience Participation Music with Online Dating Metaphor using Cloud Service
  • Sang Won Lee (University of Michigan), Antonio Deusany de Carvalho Jr (Universidade de São Paulo), and Georg Essl (University of Michigan)
  • Proceedings of the 2nd Web Audio Conference (WAC-2016), Atlanta. 2016.
Hybrid human–machine information systems: Challenges and opportunities
  • Gianluca Demartini (University of Sheffield)
  • Computer Networks v. 90, pp. 5-13, October 2015.
Learning to Incentivize: Eliciting Effort via Output Agreement
  • Yang Liu and Yiling Chen (Harvard University)
  • Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI), pp. 3782-3788, 2016.
On the Relation Between Assessor's Agreement and Accuracy in Gamified Relevance Assessment
  • Olga Megorskaya, Vladimir Kukushkin, and Pavel Serdyukov (Yandex)
  • Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 605-614. 2015.
Predicting the quality of user contributions via LSTMs
  • Rakshit Agrawal and Luca de Alfaro (University of California, Santa Cruz)
  • International Symposium on Open Collaboration (OpenSym, formerly WikiSym), 2016.
Scheduling Human Intelligence Tasks in Multi-Tenant Crowd-Powered Systems
  • Djellel Eddine Difallah (University of Fribourg), Gianluca Demartini (University of Sheffield), and Philippe Cudré-Mauroux (University of Fribourg)
  • Proceedings of the 25th International Conference on World Wide Web (WWW), pp. 855-865, 2016.
Toward a Learning Science for Complex Crowdsourcing Tasks
  • Shayan Doroudi (Carnegie Mellon University), Ece Kamar (Microsoft Research), Emma Brunskill (Carnegie Mellon University), and Eric Horvitz (Microsoft Research)
  • Proceedings of the 2016 ACM CHI Conference on Human Factors in Computing Systems, pp. 2623-2634.
  • Full Papers

Please see the online Conference Proceedings to download full papers.

Click Carving: Segmenting Objects in Video with Point Clicks
  • Suyog Dutt Jain and Kristen Grauman (University of Texas at Austin)
Crowdclass: Designing Classification-Based Citizen Science Learning Modules
  • Doris Jung-Lin Lee, Joanne Lo, Moonhyok Kim and Eric Paulos (University of California, Berkeley)
Crowdsourcing Accurate and Creative Word Problems and Hints
  • Yvonne Chen, Travis Mandel, Yun-En Liu and Zoran Popović (University of Washington)
Crowdsourcing Relevance Assessments: The Unexpected Benefits of Limiting the Time to Judge
  • Eddy Maddalena (University of Udine), Marco Basaldella (University of Udine), Dario De Nart (University of Udine), Dante Degl'Innocenti (University of Udine), Stefano Mizzaro (University of Udine) and Gianluca Demartini (University of Sheffield)
CRQA: Crowd-Powered Real-Time Automatic Question Answering System
  • Denis Savenkov and Eugene Agichtein (Emory University)
Efficient Techniques for Crowdsourced Top-k Lists
  • Luca de Alfaro, Vassilis Polychronopoulos and Neoklis Polyzotis (University of California, Santa Cruz)
Evaluating Task-Dependent Taxonomies for Navigation
  • Yuyin Sun (University of Washington), Adish Singla (ETH Zurich), Tori Yan (University of Washington), Andreas Krause (ETH Zurich) and Dieter Fox (University of Washington)
Extending Workers' Attention Span Through Dummy Events
  • Avshalom Elmalech (Bar-Ilan University), David Sarne (Bar-Ilan University), Esther David (Ashkelon College) and Chen Hajaj (Bar-Ilan University)
Interactive Consensus Agreement Games For Labeling Images
  • Paul Upchurch, Daniel Sedra, Andrew Mullen, Haym Hirsh and Kavita Bala (Cornell University)
Investigating the Influence of Data Familiarity to Improve the Design of a Crowdsourcing Image Annotation System
  • Danna Gurari (University of Texas at Austin), Mehrnoosh Sameki (Boston University) and Margrit Betke (Boston University)
"Is there anything else I can help you with?": Challenges in Deploying an On-Demand Crowd-Powered Conversational Agent
  • Ting-Hao (Kenneth) Huang (Carnegie Mellon University), Walter S. Lasecki (University of Michigan), Amos Azaria (Ariel University) and Jeffrey P. Bigham (Carnegie Mellon University)
Learning and Feature Selection under Budget Constraints in Crowdsourcing
  • Besmira Nushi, Adish Singla, Andreas Krause and Donald Kossmann (ETH Zurich)
Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
  • Yan Shvartzshnaider (New York University), Schrasing Tong (Princeton University), Thomas Wies (New York University), Paula Kift (New York University), Helen Nissenbaum (New York University), Lakshminarayanan Subramanian (New York University), and Prateek Mittal (Princeton University)
Learning to Scale Payments in Crowdsourcing with PropeRBoost
  • Goran Radanovic and Boi Faltings (Ecole Polytechnique Fédérale de Lausanne)
Leveraging the Contributions of the Casual Majority to Identify Appealing Web Content
  • Tad Hogg (Institute for Molecular Manufacturing, USA) and Kristina Lerman (University of Southern California)
MicroTalk: Using Argumentation to Improve Crowdsourcing Accuracy
  • Ryan Drapeau, Lydia B. Chilton, Jonathan Bragg and Daniel S. Weld (University of Washington)
Modeling Task Complexity in Crowdsourcing
  • Jie Yang (Delft University of Technology), Judith Redi (Delft University of Technology), Gianluca Demartini (University of Sheffield) and Alessandro Bozzon (Delft University of Technology)
Much Ado about Time: Exhaustive Annotation of Temporal Data
  • Gunnar A. Sigurdsson (Carnegie Mellon University), Olga Russakovsky (Carnegie Mellon University), Ivan Laptev (INRIA, France), Ali Farhadi (University of Washington) and Abhinav Gupta (Carnegie Mellon University)
Practical Peer Prediction for Peer Assessment
  • Victor Shnayder and David C Parkes (Harvard University)
Predicting Crowd Work Quality under Monetary Interventions
  • Ming Yin and Yiling Chen (Harvard University)
Probabilistic Modeling for Crowdsourcing Partially-Subjective Ratings
  • An T. Nguyen (University of Texas at Austin), Matthew Halpern (University of Texas at Austin), Byron C. Wallace (Northeastern University) and Matthew Lease (University of Texas at Austin)
Quality Estimation of Workers in Collaborative Crowdsourcing Using Group Testing
  • Prakhar Ojha and Partha Talukdar (Indian Institute of Science)
State Detection using Adaptive Human Sensor Sampling
  • Ioannis Boutsis (Athens University of Economics and Business), Vana Kalogeraki (Athens University of Economics and Business) and Dimitrios Gunopulos (University of Athens)
Studying the Effects of Task Notification Policies on Participation and Outcomes in On-the-go Crowdsourcing
  • Yongsung Kim, Emily Harburg, Shana Azria, Aaron Shaw, Elizabeth Gerber, Darren Gergle and Haoqi Zhang (Northwestern University)
Understanding Crowdsourcing Workflow: Modeling and Optimizing Iterative and Parallel Processes
  • Shinsuke Goto, Toru Ishida and Donghui Lin (Kyoto University)
Validating the Quality of Crowdsourced Psychometric Personality Test Items
  • Bao Sheng Loe (University of Cambridge), Francis Smart (Michigan State University), Lenka Firtova (University of Economics, Prague), Corinna Brauner (University of Münster), Laura Lueneborg (University of Bonn) and David Stillwell (University of Cambridge)
Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments
  • Tyler McDonnell (University of Texas at Austin), Matthew Lease (University of Texas at Austin), Mucahid Kutlu (Qatar University) and Tamer Elsayed (Qatar University)
  • Works-in-Progress

Accepted Works-in-Progress papers are non-archival and will be presented as posters at the conference. Some authors have also chosen to share an online copy of their paper, in which case the paper title links to the online copy.

A GWAP Approach to Analyzing Informal Algorithm for Collaborative Group Problem Solving
  • Tatsuki Furukawa, Hajime Mizuyama and Tomomi Nonaka (Aoyama Gakuin University)
A Markov Chain based Ensemble Method for Crowdsourced Clustering
  • Sujoy Chatterjee, Enakshi Kundu and Anirban Mukhopadhyay (University of Kalyani)
Exploring Required Work and Progress Feedback in Crowdsourced Mapping
  • Sofia Eleni Spatharioti (Northeastern University), Becca Govoni (Northeastern University), Jennifer Carrera (Michigan State University), Sara Wylie (Northeastern University) and Seth Cooper (Northeastern University)
CoFE: A Collaborative Feature Engineering Framework for Data Science
  • Yoshihiko Suhara (MIT Media Lab), Hideki Awashima (Recruit Institute of Technology), Hidekazu Oiwa (Recruit Institute of Technology) and Alex Pentland (MIT Media Lab)
Consensus of Dependent Opinions
  • Sujoy Chatterjee (University of Kalyani), Anirban Mukhopadhyay (University of Kalyani) and Malay Bhattacharyya (Indian Institute of Engineering Science and Technology, Shibpur)
Crowdsourcing Information Extraction for Biomedical Systematic Reviews
  • Yalin Sun (University of Texas at Austin), Pengxiang Cheng (University of Texas at Austin), Shengwei Wang (University of Texas at Austin), Hao Lyu (University of Texas at Austin), Matthew Lease (University of Texas at Austin), Iain Marshall (King's College London) and Byron C. Wallace (Northeastern University)
Does Communication Help People Coordinate?
  • Yevgeniy Vorobeychik, Zlatko Joveski and Sixie Yu (Vanderbilt University)
Dynamic Allocation of Crowd Contributions for Sentiment Analysis during the 2016 U.S. Presidential Election
  • Mehrnoosh Sameki, Mattia Gentil, Kate K. Mays, Lei Guo and Margrit Betke (Boston University)
Extremest Extraction: Push-Button Learning of Novel Events
  • Christopher Lin (University of Washington), Mausam (IIT Delhi), and Daniel Weld (University of Washington, CSE)
Feasibility of Post-Editing Speech Transcriptions with a Mismatched Crowd
  • Purushotam Radadia (Tata Consultancy Services) and Shirish Karande (TRDDC)
Feature Based Task Recommendation in Crowdsourcing with Implicit Observations
  • Habibur Rahman (University of Texas at Arlington), Lucas Joppa (Microsoft Research), and Senjuti Basu Roy (New Jersey Institute of Technology)
Feedback and Timing in a Crowdsourcing Game
  • Gili Freedman, Sukdith Punjasthitkul, Max Seidman and Mary Flanagan (Dartmouth College)
Group Rotation Type Crowdsourcing
  • Katsumi Kumai (University of Tsukuba), Jianwei Zhang (Tsukuba University of Technology), Yuhki Shiraishi (Tsukuba University of Technology), Atsuyuki Morishima (University of Tsukuba) and Hiroyuki Kitagawa (University of Tsukuba)
Incentive Engineering Framework for Crowdsourcing Systems
  • Nhat V.Q. Truong (University of Southampton), Sebastian Stein (University of Southampton), Long Tran-Thanh (University of Southampton) and Nicholas R. Jennings (Imperial College London)
Incentives for Truthful and Informative Post-Publication Peer Review
  • Luca de Alfaro (UC Santa Cruz) and Marco Faella (University of Naples)
Incentives for Truthful Evaluations
  • Luca de Alfaro (University of California Santa Cruz), Marco Faella (University of Naples “Federico II”), Vassilis Polychronopoulos (University of California Santa Cruz), and Michael Shavlovsky (University of California Santa Cruz)
Learning Phrasal Lexicons for Robotic Commands using Crowdsourcing
  • Junjie Hu, Jean Oh and Anatole Gershman (Carnegie Mellon University)
Pairwise, Magnitude, or Stars: What's the Best Way for Crowds to Rate?
  • Alessandro Checco and Gianluca Demartini (University of Sheffield)
Qualitative Framing of Financial Incentives – A Case of Emotion Annotation
  • Sephora Madjiheurem (EPFL), Valentina Sintsova (EPFL) and Pearl Pu (Human Computer Interaction Group, Swiss Federal Institute of Technology in Lausanne)
Redesigning Product X as an Inversion-Problem Game
  • Yu Kagoshima, Hajime Mizuyama and Tomomi Nonaka (Aoyama Gakuin University)
Route Market: A Prediction-Market-Based Route Recommendation Service
  • Keisuke Beppu, Hajime Mizuyama and Tomomi Nonaka (Aoyama Gakuin University)
The Effect of Class Imbalance and Order on Crowdsourced Relevance Judgments
  • Rehab K. Qarout, Alessandro Checco and Gianluca Demartini (University of Sheffield)
Toward Crowdsourced User Studies for Software Evaluation
  • Florian Daniel (Politecnico di Milano) and Pavel Kucherbaev (University of Trento)
Visual Questions: Predicting If a Crowd Will (Dis)Agree on the Answer
  • Danna Gurari and Kristen Grauman (University of Texas at Austin)
Video Summarization using Causality Graphs
  • Shay Sheinfeld (IDC, Herzliya), Yotam Gingold (George Mason University) and Ariel Shamir (IDC, Herzliya)