Accepted Papers

Accepted papers are grouped by track and listed below.

Proceedings

View Conference Proceedings

Full Papers

Analyzing Workers Performance in Online Mapping Tasks Across Web, Mobile, and Virtual Reality Platforms
  • Gerard van Alphen, Sihang Qiu, Alessandro Bozzon and Geert-Jan Houben
CrowDEA: Multi-view Idea Prioritization with Crowds
  • Yukino Baba, Jiyi Li and Hisashi Kashima
Effective Operator Summaries Extraction
  • Ido Nimni and David Sarne
Enhancing Collective Estimates by Aggregating Cardinal and Ordinal Inputs
  • Ryan Kemmer, Yeawon Yoo, Adolfo Escobedo and Ross Maciejewski
Fast, Accurate, and Healthier: Interactive Blurring Helps Moderators Reduce Exposure to Harmful Content
  • Anubrata Das, Brandon Uyvu Dang and Matthew Lease
How Context Influences Cross-Device Task Acceptance in Crowd Work
  • Danula Hettiachchi, Senuri Wijenayake, Simo Hosio, Vassilis Kostakos and Jorge Goncalves
Impact of Algorithmic Decision Making on Human Behavior: Evidence from Ultimatum Bargaining
  • Alexander Erlei, Franck Awounang Nekdem, Lukas Meub, Avishek Anand and Ujwal Gadiraju
Motivating Novice Crowd Workers through Goal Setting: An Investigation into the Effects on Complex Crowdsourcing Task Training
  • Amy Rechkemmer and Ming Yin
Predicting Crowdworkers' Performance as Human-Sensors for Robot Navigation
  • Nir Machlev and David Sarne
Privacy-Preserving Face Redaction using Crowdsourcing
  • Abdullah Alshaibani, Sylvia Carrell, Li-Hsin Tseng, Jungmin Shin and Alexander Quinn
Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
  • Donald Honeycutt, Mahsan Nourani and Eric Ragan
The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems
  • Mahsan Nourani, Joanie T. King and Eric Ragan
Trainbot: A Conversational Interface to Train Crowd Workers for Delivering On-Demand Therapy
  • Tahir Abbas, Vassilis-Javed Khan, Ujwal Gadiraju and Panos Markopoulos
Understanding the Effects of Explanation Types and User Motivations on Movie Recommender System Use
  • Qing Li, Sharon Lynn Chu, Nanjie Rao and Mahsan Nourani
Verifying Extended Entity Relationship Diagrams with Open Tasks
  • Marta Sabou, Klemens Käsznar, Dietmar Winkler, Markus Zlabinger and Stefan Biffl

Short Papers

A Case for Soft Loss Functions
  • Alexandra Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank and Massimo Poesio
Batch Prioritization of Data Labeling Tasks for Training Classifiers
  • Masanari Kimura, Kei Wakabayashi and Atsuyuki Morishima
Does Exposure to Diverse Perspectives Mitigate Biases in Crowdwork? An Explorative Study
  • Xiaoni Duan, Chien-Ju Ho and Ming Yin
How Useful Are the Machine-Generated Interpretations? A Human Evaluation on Guessing the Wrongly Predicted Labels
  • Hua Shen and Ting-Hao 'Kenneth' Huang
Modeling Annotator Perspective and Polarized Opinions to Improve Hate Speech Detection
  • Sohail Akhtar, Valerio Basile and Viviana Patti
Schema and Metadata Guide the Collective Generation of Relevant and Diverse Work
  • Xiaotong Tone Xu, Judith Fan and Steven Dow
The Challenges of Crowd Workers in Rural and Urban America
  • Claudia Flores-Saviaga, Yuwen Li, Benjamin Hanrahan, Jeffrey Bigham and Saiph Savage

Doctoral Consortium

Accounting for Human Factors in Machine-Assisted Decision-Making
  • Nina Grgic-Hlaca
Feature-Engineering from Meta-Cognitive Data for Machine-Learning based Decision-Aggregation
  • Hilla Shinitzky
Improving Crowd Working Conditions with A Novel Cooperation System
  • Haoyu Xie
Labour as a Commodity: Social Reproduction in the Development of Artificial Intelligence
  • Julian Posada
The Role of Energy Efficiency Smart Buildings in the Transformation of the Energy System: a Data Science Analysis from Social and Technical Perspectives
  • Jiao Jiao

Works-in-Progress & Demonstrations

Works-in-Progress
A Survey of Visually Impaired Workers in Japanese and US Crowdsourcing Platforms
  • Ying Zhong, Makoto Kobayashi, Masaki Matsubara, Atsuyuki Morishima
A TOPSIS-based Multi-Objective Model for Constrained Crowd Judgment Analysis
  • Sujoy Chatterjee, Sunghoon Lim
Assessing Political Bias using Crowdsourced Pairwise Comparisons
  • Tzu-Sheng Kuo, Mcardle Hankin, Miranda Li, Andrew Ying, Cathy Wang
Crowdwork as a Snapshot in Time: Image Annotation Tasks during a Pandemic
  • Evgenia Christoforou, Pınar Barlas, Jahna Otterbacher
Dynamic Worker-Task Assignment for High-Quality Task Results with ML Workers
  • Yu Yamashita, Masaki Kobayashi, Kei Wakabayashi, Atsuyuki Morishima
Exploring Effectiveness of Inter-Microtask Qualification Tests in Crowdsourcing
  • Masaya Morinaga, Susumu Saito, Teppei Nakano, Tetsunori Kobayashi, Tetsuji Ogawa
Qualification Labour: A Fair Wage Isn’t Enough if Workers Need to Do 5,000 Low Paid Tasks to Qualify for Your Task
  • Jonathan Kummerfeld
Socially Augmented Crowdsourced Collection of Folk Theories
  • Jonas Oppenlaender
Towards a Crowdsourcing Platform for Low Resource Languages -- A Collectivist Approach
  • Sarah Luger, Christopher M. Homan, Allashera Tapo
Turkish Judge: A Peer Evaluation Framework for Crowd Work Appeals
  • Mukund Venkateswaran, Edward Cohen, Nivedita Sankar, Chris Callison-Burch
Demonstrations
OpenTag: Understanding Human Perceptions of Image Tagging Algorithms
  • Kyriakos Kyriakou, Pınar Barlas, Styliani Kleanthous, Jahna Otterbacher

Blue Sky Ideas Papers

A Technology-Assisted Social Computing Framework for Solving Complex Social Problems
  • Kevin Kells
COGNET: The Planetary Cognition Delivery Network
  • Simo Hosio, Niels van Berkel
Group-Assign: Type Theoretic Framework for Human AI Orchestration
  • Aik Beng Ng, Zhangsheng Lai, Simon See, Shaowei Lin
Using Human Cognitive Limitations to Enable New Systems
  • Vincent Conitzer
Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive crowdsourcing and human computation announcements (e.g., calls for papers, job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past communications on the HCOMP mailing list.
  • Keep track of our Twitter hashtag #HCOMP2024.
  • Join the HCOMP Slack Community to connect with researchers, practitioners, industry members, and crowd workers around human computation and related topics.