Accepted Papers

Accepted papers are grouped by track, with a link to each track group provided below.

Full Papers

A Hybrid Approach to Identifying Unknown Unknowns of Predictive Models
  • Colin Vandenhof
A Large-Scale Study of the "Wisdom of Crowds"
  • Camelia Simoiu, Chiraag Sumanth, Alok Shankar and Sharad Goel
AI-based Request Augmentation to Increase Crowdsourcing Participation
  • Ranjay Krishna, Li Fei-Fei, Michael Bernstein, Pranav Khadpe and Junwon Park
Beyond Accuracy: On the Role of Mental Models in Human-AI Teams
  • Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel Weld, Walter Lasecki and Eric Horvitz
Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval
  • Arijit Ray, Yi Yao, Rakesh Kumar, Ajay Divakaran and Giedrius Burachas
Crowdsourced PAC Learning under Classification Noise
  • Shelby Heinecke and Lev Reyzin
Fair Work: Crowd Work Minimum Wage with One Line of Code
  • Mark Whiting, Grant Hugh and Michael Bernstein
Gamification of Loop-Invariant Discovery from Code
  • Andrew Walter, Benjamin Boskin, Seth Cooper and Panagiotis Manolios
Going Against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis
  • Yan Shvartzshnaider, Noah Apthorpe, Nick Feamster and Helen Nissenbaum
How Do We Talk about Other People? Group (Un)Fairness in Natural Language Image Descriptions
  • Jahna Otterbacher, Pınar Barlas, Styliani Kleanthous and Kyriakos Kyriakou
Human Evaluation of Models Built for Interpretability
  • Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel Gershman and Finale Doshi-Velez
Interpretable Image Recognition with Hierarchical Prototypes
  • Peter Hase, Chaofan Chen, Oscar Li and Cynthia Rudin
Learning to Predict Population-Level Label Distributions
  • Tong Liu, Pratik Sanjay Bongale, Akash Venkatachalam and Christopher Homan
Not Everyone Can Write Great Examples But Great Examples Can Come From Anywhere
  • Shayan Doroudi, Ece Kamar and Emma Brunskill
Platform-related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks
  • Rehab Qarout, Alessandro Checco, Gianluca Demartini and Kalina Bontcheva
Progression In A Language Annotation Game With A Purpose
  • Chris Madge, Juntao Yu, Jon Chamberlain, Udo Kruschwitz, Silviu Paun and Massimo Poesio
Second Opinion: Supporting last-mile person identification with crowdsourcing and face recognition
  • Vikram Mohanty, Kareem Abdol-Hamid, Courtney Ebersohl and Kurt Luther
Testing Stylistic Interventions to Reduce Emotional Impact of Content Moderation Workers
  • Sowmya Karunakaran and Rashmi Ramakrishnan
The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems
  • Mahsan Nourani, Samia Kabir, Sina Mohseni and Eric Ragan
Understanding the Impact of Text Highlighting in Crowdsourcing Tasks
  • Jorge Ramirez, Marcos Baez, Fabio Casati and Boualem Benatallah
What You See is What You Get? The Impact of Representation Criteria on Human Bias in Hiring
  • Andi Peng, Besmira Nushi, Emre Kiciman, Kori Inkpen, Siddharth Suri and Ece Kamar
Who is in Your Top Three? Optimizing Learning in Elections with Many Candidates
  • Nikhil Garg, Lodewijk Gelauff, Sukolsak Sakshuwong and Ashish Goel

Works-in-Progress & Demonstrations

Works-in-Progress

Accepted works-in-progress will be presented as posters at the conference.

"It's QuizTime!": A Study of Online Verification Practices on Twitter
  • Sukrit Venkatagiri, Jacob Thebault-Spieker, Sarwat Kazmi, Efua Akonor and Kurt Luther
A Metrological Framework for Evaluating Crowd-powered Instruments
  • Chris Welty, Lora Aroyo and Praveen Paritosh
Adaptive Query Processing with the Crowd: Joins and the Joinable Filter
  • Han Maw Aung, Cienn Givens, Amber Kampen, Rebecca Qin and Beth Trushkowsky
Citizen Scientist Amplification
  • Darryl Wright, Lucy Fortson, Chris Lintott and Mike Walmsley
Discovering Biases in Image Datasets with the Crowd
  • Xiao Hu, Haobo Wang, Somesh Due, Anirudh Vegesana, Kaiwen Yu, Yung-Hsiang Lu and Ming Yin
Fair Payments in Adaptive Voting
  • Margarita Boyarskaya and Panos Ipeirotis
Human-in-the-Loop Selection of Optimal Time Series Anomaly Detection Methods
  • Cynthia Freeman and Ian Beaver
Improving Reproducibility of Crowdsourcing Experiments
  • Kohta Katsuno, Masaki Matsubara, Chiemi Watanabe and Atsuyuki Morishima
Lexical Learning as an Online Optimal Experiment: Building Efficient Search Engines through Human-Machine Collaboration
  • Jacopo Tagliabue and Reuben Cohn-Gordon
On Task Decision Processes based on Discounted Satisficing Heuristic Employed by Crowd Workers
  • Mounica Devaguptapu and Venkat Sriram Siddhardh Nadendla
Promoting Learning and Engagement in Human Computing Games: a Study of Educational Material Formats
  • Rogerio de Leon Pereira, Andrea Bunt and Olivier Tremblay-Savard
Understanding Conversational Style in Conversational Microtask Crowdsourcing
  • Sihang Qiu, Ujwal Gadiraju and Alessandro Bozzon
Demonstrations

Accepted demonstrations will be presented as a working system and/or poster at the conference.

Building Human Computation Space on the WWW: Labeling Web Contents through Web Browsers
  • Yoshinari Shirai, Yasue Kishino, Yutaka Yanagisawa, Shin Mizutani and Takayuki Suyama
CrowdHub: Extending crowdsourcing platforms for the controlled evaluation of tasks designs
  • Jorge Ramírez, Simone Degiacomi, Davide Zanella, Marcos Baez, Fabio Casati and Boualem Benatallah
Crowdsource by Google: A Platform for Collecting Inclusive and Representative Machine Learning Data
  • Supheakmungkol Sarin, Knot Pipatsrisawat, Khiem Pham, Anurag Batra and Luís Valente
Deliberative Democracy with the Online Deliberation Platform
  • James Fishkin, Nikhil Garg, Lodewijk Gelauff, Ashish Goel, Sukolsak Sakshuwong, Alice Siu, Kamesh Munagala and Sravya Yandamuri
Fair Work: Crowd Work Minimum Wage with One Line of Code
  • Mark Whiting, Grant Hugh and Michael Bernstein
PairWise: Mitigating Political Bias in Crowdsourced Content Moderation
  • Jacob Thebault-Spieker, Sukrit Venkatagiri, David Mitchell, Chris Hurt and Kurt Luther
The Impact of Visual Representations in a Machine Teaching Interaction
  • Claudel Rheault, Lorne Schell, Doriane Soulas, Chris Tyler, Tara Tressel and Masha Krol
Understandable Microtasks With No Visual Representation
  • Ying Zhong, Masaki Matsubara, Makoto Kobayashi and Atsuyuki Morishima
Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive crowdsourcing and human computation announcements (e.g., calls for papers and job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past communications on the HCOMP mailing list.
  • Keep track of our Twitter hashtag #HCOMP2024.
  • Join the HCOMP Slack Community to stay in touch with researchers, industry practitioners, and crowd workers working on human computation and related topics.