Accepted Papers

Accepted papers are grouped by track below.

Full Papers

"A Game Without Competition is Hardly a Game": The Impact of Competitions on Player Activity in a Human Computation Game
  • Neal Reeves, Peter West and Elena Simperl
Active Learning with Unbalanced Classes & Example-Generation Queries
  • Christopher Lin, Mausam and Daniel Weld
All That Glitters is Gold -- An Attack Scheme on Gold Questions in Crowdsourcing
  • Alessandro Checco, Jo Bates and Gianluca Demartini
An Empirical Study on Short- and Long-term Effects of Self-Correction in Crowdsourced Microtasks
  • Masaki Kobayashi, Hiromi Morita, Masaki Matsubara, Nobuyuki Shimizu and Atsuyuki Morishima
Capturing and Interpreting Ambiguity in Crowdsourcing Frame Disambiguation
  • Anca Dumitrache, Lora Aroyo and Chris Welty
Crowdsourcing real-time viral disease and pest information. A case of nation-wide cassava disease surveillance in a developing country
  • Daniel Mutembesa, Ernest Mwebaze and Christopher Omongo
EURECA: Enhanced Understanding of Real Environments via Crowd Assistance
  • Sai Gouravajhala, Jinyeong Yim, Karthik Desingh, Yanda Huang, Odest Jenkins and Walter Lasecki
How do Crowdworker Communities and Microtask Markets Influence Each Other? A Data-Driven Study on Amazon Mechanical Turk
  • Jie Yang, Carlo van der Valk, Tobias Hossfeld, Judith Redi and Alessandro Bozzon
Permutation-Invariant Consensus over Crowdsourced Labels
  • Michael Giancola, Randy Paffenroth and Jacob Whitehill
Plexiglass: Multiplexing Passive and Active Tasks for More Efficient Crowdsourcing
  • Akshay Rao, Harmanpreet Kaur and Walter Lasecki
Skill-and-Stress-Aware Assignment of Crowd-Worker Groups to Task Streams
  • Katsumi Kumai, Masaki Matsubara, Yuhki Shiraishi, Daisuke Wakatsuki, Jianwei Zhang, Takeaki Shionome, Hiroyuki Kitagawa and Atsuyuki Morishima
Social Cues, Social Biases: Stereotypes in Annotations on People Images
  • Jahna Otterbacher
Striving to Earn More: Strategies and Tool Use Among Crowd Workers
  • Toni Kaplan, Susumu Saito, Kotaro Hara and Jeffrey Bigham
The Role of Novelty in Securing Investors for Equity Crowdfunding Campaigns
  • Emoke-Agnes Horvat, Johannes Wachs, Rong Wang and Aniko Hannak
Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure
  • Besmira Nushi, Ece Kamar and Eric Horvitz
Towards Quantifying Behaviour in Social Crowdsourcing Communities
  • Khobaib Zaamout and Ken Barker
Usability of Humanly Computable Passwords
  • Samira Samadi, Adam Kalai and Santosh Vempala
Utilizing Crowdsourced Asynchronous Chat for Efficient Collection of Dialogue Dataset
  • Kazushi Ikeda and Keiichiro Hoashi
Verifying Conceptual Domain Models with Human Computation: A Case Study in Software Engineering
  • Marta Sabou, Dietmar Winkler, Stefan Biffl and Peter Penzenstadler
Visual Question Answer Diversity
  • Chun-Ju Yang, Kristen Grauman and Danna Gurari
WingIt: Efficient refinement of unclear task instructions
  • V. K. Chaithanya Manam and Alexander J. Quinn
Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to Ensure Quality Relevance Annotations
  • Tanya Goyal, Tyler McDonnell, Mucahid Kutlu, Tamer Elsayed and Matthew Lease

Works-in-Progress & Demonstrations

Accepted Works-in-Progress & Demonstrations will be presented as posters at the conference.

A Study of Narrative Creation by Means of Crowds and Niches
  • Oana Inel, Sabrina Sauer and Lora Aroyo
Are 1,000 Features Worth A Picture? Combining Crowdsourcing and Face Recognition to Identify Civil War Soldiers
  • Vikram Mohanty, David Thames and Kurt Luther
But Who Protects the Moderators? The Case of Crowdsourced Image Moderation
  • Brandon Dang, Martin Johannes Riedl and Matt Lease
Collective Story Writing through Linking Images
  • Auroshikha Mandal, Mehul Agarwal and Malay Bhattacharyya
Crowd-sourced knowledge graph extension: a belief revision based approach
  • Artem Revenko, Marta Sabou, Albin Ahmeti and Martin Schauer
Crowdsourcing for Reminiscence Chatbot Design
  • Svetlana Nikitina, Florian Daniel, Marcos Baez, Fabio Casati and Georgy Kopanitsa
Effects of Algorithmic Decision-Making and Interpretability on Human Behavior: Experiments using Crowdsourcing
  • Avishek Anand, Kilian Bizer, Alexander Erlei, Ujwal Gadiraju, Christian Heinze, Lukas Meub, Wolfgang Nejdl and Bjoern Steinroetter
Imitation Learning on Atari using Non-Expert Human Annotations
  • Ameya Panse, Tushar Madheshia, Anand Sriraman and Shirish Karande
Leveraging Crowd and AI to Support the Search Phase of Literature Reviews
  • Evgeny Krivosheev, Jorge Ramírez, Marcos Baez, Fabio Casati and Boualem Benatallah
Moving Disambiguation of Regulations from the Cathedral to the Bazaar
  • Manasi Patwardhan, Richa Sharma, Abhishek Sainani, Shirish Karande and Smita Ghaisas
Navigating Uncertainty in Equity Crowdfunding
  • Flemming Binderup Gammelgaard and Claus Bossen
On Localizing Keywords in Continuous Speech using Mismatched Crowd
  • Purushotam Radadia, Tushar Madhesia, Kanika Kalra, Anand Sriraman, Manasi Patwardhan and Shirish Karande
Quality Evaluation Methods for Crowdsourced Image Segmentation
  • Doris Jung-Lin Lee, Akash Das Sarma and Aditya Parameswaran
Synthetic Accessibility Assessment Using Auxiliary Responses
  • Shun Ito, Yukino Baba, Tetsu Isomura and Hisashi Kashima
Towards Crowdsourcing Clickbait on YouTube
  • Jiani Qu, Anny Marleen Hißbach, Tim Gollub and Martin Potthast