Accepted Papers

Accepted papers are grouped by track below.

Full Papers

A 10-Month-Long Deployment Study of On-Demand Recruiting for Low-Latency Crowdsourcing
  • Ting-Hao (Kenneth) Huang and Jeffrey P. Bigham (CMU)
A Lightweight and Real-Time Worldwide Earthquake Detection and Monitoring System Based on Citizen Sensors
  • Jazmine Maldonado, Jheser Guzman, and Barbara Poblete (University of Chile)
A Trust-Based Coordination System for Participatory Sensing Applications
  • Alexandros Zenonos, Sebastian Stein and Nick R. Jennings (Imperial College London)
"But you Promised": Methods to Improve Crowd Engagement In Non-Ground Truth Tasks
  • Avshalom Elmalech and Barbara J. Grosz (Harvard)
Cioppino: Multi-Tenant Crowd Management
  • Daniel Haas (UC Berkeley) and Michael J. Franklin (University of Chicago)
Confusing the Crowd: Task Instruction Quality on Amazon Mechanical Turk
  • Meng-Han Wu and Alexander J. Quinn (Purdue)
CrowdMask: Using Crowds to Preserve Privacy in Crowd-Powered Systems via Progressive Filtering
  • Harmanpreet Kaur (University of Michigan), Mitchell Gordon (University of Rochester), Yiwei Yang (University of Michigan), Jeffrey P. Bigham (CMU), Jaime Teevan (Microsoft Research), Ece Kamar (Microsoft Research) and Walter S. Lasecki (University of Michigan)
Crowd-O-Meter: Predicting if a Person is Vulnerable to Believe Political Claims
  • Mehrnoosh Sameki (Boston University), Tianyi Zhang (UT Austin), Linli Ding (UT Austin), Margrit Betke (Boston University) and Danna Gurari (UT Austin)
Crowdsourcing a Parallel Corpus for Conceptual Analysis of Natural Language
  • Jamie C. Macbeth and Sandra Grandic (Fairfield University)
Crowdsourcing Paper Screening in Systematic Literature Reviews
  • Evgeny Krivosheev (University of Trento), Fabio Casati (University of Trento & Tomsk Polytechnic University), Valentina Caforio (University of Trento), and Boualem Benatallah (University of New South Wales)
Deja Vu: Characterizing Worker Reliability Using Task Consistency
  • Alex C. Williams (Waterloo), Joslin Goh (Waterloo), Charlie G. Willis (Harvard), Aaron M. Ellison (Harvard), James H. Brusuelas (Oxford), Charles C. Davis (Harvard) and Edith Law (Waterloo)
Drafty: Enlisting Users to be Editors who Maintain Structured Data
  • Shaun Wallace (Brown), Lucy Van Kleunen (Brown), Marianne Aubin-Le Quere (Brown), Abraham Peterkin (Brown), Yirui Huang (University of Toronto) and Jeff Huang (Brown)
Dynamic Filter: Adaptive Query Processing with the Crowd
  • Doren Lan, Katherine Reed, Austin Shin and Beth Trushkowsky (Harvey Mudd College)
Effective Prize Structure for Simple Crowdsourcing Contests with Participation Costs
  • David Sarne and Michael Lepioshkin (Bar Ilan University)
Evaluating Visual Conversational Agents via Cooperative Human-AI Games
  • Prithvijit Chattopadhyay (Georgia Tech), Deshraj Yadav (Georgia Tech), Viraj Prabhu (Georgia Tech), Arjun Chandrasekaran (Georgia Tech), Abhishek Das (Georgia Tech), Stefan Lee (Georgia Tech), Dhruv Batra (Georgia Tech and Facebook AI Research) and Devi Parikh (Georgia Tech and Facebook AI Research)
Gender Differences in Equity Crowdfunding
  • Emoke-Agnes Horvat (Northwestern) and Theodore Papamarkou (University of Glasgow)
Lessons from an Online Massive Genomics Computer Game
  • Akash Singh, Faizy Ahsan, Mathieu Blanchette and Jerome Waldispuhl (McGill University)
Let's Agree to Disagree: Fixing Agreement Measures for Crowdsourcing
  • Alessandro Checco (University of Sheffield), Kevin Roitero (University of Udine), Eddy Maddalena (University of Southampton), Stefano Mizzaro (University of Udine) and Gianluca Demartini (University of Queensland)
Leveraging Side Information to Improve Label Quality Control in Crowdsourcing
  • Yuan Jin (Monash University), Mark Carman (Monash University), Dongwoo Kim (Australian National University) and Lexing Xie (Australian National University)
Octopus: A Framework for Cost-Quality-Time Optimization in Crowdsourcing
  • Karan Goel (CMU), Shreya Rajpal (UIUC) and Mausam (IIT-Delhi)
Revenue-Maximizing Stable Pricing in Online Labor Markets
  • Chaolun Xia and Shan Muthukrishnan (Rutgers)
Supporting ESL Writing by Prompting Crowdsourced Structural Feedback
  • Yi-Ching Huang (National Taiwan University), Jiunn-Chia Huang (National Taiwan University), Hao-Chuan Wang (National Tsing Hua University) and Jane Yung-jen Hsu (National Taiwan University and Intel-NTU Connected Context Computing Center)
Supporting Image Geolocation with Diagramming and Crowdsourcing
  • Rachel Kohler, John Purviance and Kurt Luther (Virginia Tech)
Toward Scalable Social Alt Text: Conversational Crowdsourcing as a Tool for Refining Vision-to-Language Technology for the Blind
  • Elliot Salisbury (University of Southampton), Ece Kamar (Microsoft Research) and Meredith Ringel Morris (Microsoft Research)

Works-in-Progress & Demonstrations

Accepted Works-in-Progress & Demonstrations are non-archival and will be presented as posters at the conference. Some authors have chosen to share an online copy of their paper; in those cases, the paper title links to the online copy.

A Human-Computation Platform for Multi-Scale Genome Analysis
  • Anjum Ibna Matin, Akash Singh, Chris Drogaris, Mardel Maduro, Elena Nazarova, Mathieu Blanchette, Olivier Tremblay Savard and Jerome Waldispuhl
A Million Passwords in Minutes: Humanly Usable and Secure Password Strategies
  • Samira Samadi and Santosh Vempala
A System for Composing Music in Collaboration with Musicians
  • Noriko Otani, Daisuke Okabe and Masayuki Numao
Asking the Right Way: Design Interventions for Microtasks
  • Peter Organisciak, Jaime Teevan and Michael B. Twidale
Capturing Information Sharing Strategy in a Problem-Solving Team Playing ColPMan Game
  • Tatsuki Furukawa, Tomomi Nonaka and Hajime Mizuyama
CIFT: Crowd-Informed Fine-Tuning to Improve Machine Learning Ability
  • John Lalor, Hao Wu and Hong Yu
Comparing the Motivational Effects of Altruism, Payment and Reputation on Engagement with Human Computation Tasks
  • Salem Alamri, Paul Kwan and William Billingsley
Creating Curriculum of Unknown Unknowns by Battling Machine Opponents
  • Anand Sriraman and Shirish Karande
Crowdsourcing Real-Time Viral Disease and Pest Information: A Case of Nation-Wide Cassava Disease Surveillance in a Developing Country
  • Daniel Mutembesa, Ernest Mwebaze, Anthony Pariyo and Christopher Omongo
Crowdsourcing the Transcription of Privacy-Sensitive Historical Handwritten Documents
  • Unmil Karadkar and Wuxi Li
Decoded: Crowdsourcing Summaries of Internet Privacy Policies
  • Grace Arnold, Elizabeth Hamp, Mara Levy, Yoni Nachmany and Chris Callison-Burch
Decomposing Difficult Tasks into More Simple Ones: Crowdsourcing Taxonomy Creation
  • Sarah Luger, Robyn Perry and William Chui
Design Activism for Minimum Wage Crowd Work
  • Akash Mankar, Riddhi Shah and Matthew Lease
From Task Classification Towards Similarity Measures for Recommendation in Crowdsourcing Systems
  • Steffen Schnitzer, Svenja Neitzel and Christoph Rensing
Identifying Unsafe Videos on Online Public Media Using Real-Time Crowdsourcing
  • Sankar Kumar Mridha, Braznev Sarkar, Sujoy Chatterjee and Malay Bhattacharyya
Modeling Real-Time Student Competence with an MDP
  • Jay Patel and Harshadbhai Patel
On-the-Job Learning for Micro-Task Workers
  • Monika Streuer, Steven P. Dow, Markus Krause and Magie Hall
Prototype Tasks: Improving Crowdsourcing Results through Rapid, Iterative Task Design
  • Snehalkumar 'Neil' S. Gaikwad, Nalin Chhibber, Vibhor Sehgal, Alipta Ballav, Catherine Mullings, Ahmed Nasser, Angela Richmond-Fuller, Aaron Gilbee, Dilrukshi Gamage, Mark Whiting, Sharon Zhou, Sekandar Matin, Senadhipathige Niranga, Shirish Goyal, Dinesh Majeti, Preethi Srinivas, Adam Ginzberg, Kamila Mananova, Karolina Ziulkoski, Jeff Regino, Tejas Sarma, Akshansh Sinha, Abhratanu Paul, Christopher Diemert, Mahesh Murag, William Dai, Rajan Vaish and Michael Bernstein
Quality Enhancement by Weighted Rank Aggregation of Crowd Opinion
  • Sujoy Chatterjee, Anirban Mukhopadhyay and Malay Bhattacharyya
Requesters' Personal Values, Just-World Beliefs, and Their Choice of Incentive Mechanisms
  • Yuko Sakurai, Masafumi Matsuda and Satoshi Oyama
Self-Forming Quality Teams Identify High Performing Workers
  • Andrew Dennis, Steven P. Dow, Markus Dueker and Markus Krause
Shifts in Rating Bias Due to Scale Saturation
  • Kanika Kalra, Manasi Patwardhan and Shirish Karande
Siamese LSTMs for Translation Post-Edit Ranking
  • Manasi Patwardhan, Kanika Kalra and Shirish Karande
Small Profits and Quick Returns: A Practical Social Welfare Maximizing Incentive Mechanism for Deadline-Sensitive Tasks in Crowdsourcing
  • Duin Back, Bongjun Choi and Jing Chen
Snap’N’Go: Towards Coordinating the Crowd in Mobile Crowd Sensing Platforms
  • Christine Bassem, Hannah Murphy, Amy Qiu and Megan Shum
Socio-technical Revelation of Knowledge Transfer Potentials
  • Jonas Oppenlaender and Jesse Benjamin
Speech-To-Tasks: Real-Time Crowd Generation of Task Lists from Speech
  • Sang Won Lee, Yan Chen and Walter Lasecki
Two Tools Are Better Than One: Tool Diversity as a Means of Improving Aggregate Crowd Performance on an Object Segmentation Task
  • Jean Song, Raymond Fok, Fan Yang, Kyle Wang, Alan Lundgard and Walter Lasecki
Utilizing Crowdsourced Asynchronous Chat for Efficient Collection of Dialogue Dataset
  • Kazushi Ikeda and Keiichiro Hoashi
Weak Labeling for Crowd Learning
  • Iker Beñaran-Muñoz, Jerónimo Hernández-González and Aritz Pérez
Who to Query? Spatially-Blind Participatory Crowdsensing under Budget Constraints
  • Mai Elsherief, Morgan Vigil-Hayes, Ramya Raghavendra and Elizabeth Belding