Papers

A Cluster-Aware Transfer Learning for Bayesian Optimization of Personalized Preference Models in the Crowd Setting
  • Haruto Yamasaki, Masaki Matsubara, Hiroyoshi Ito, Yuta Nambu, Masahiro Kohjima, Yuki Kurauchi, Ryuji Yamamoto and Atsuyuki Morishima
A Crowd–AI Collaborative Approach to Address Demographic Bias for Student Performance Prediction in Online Education
  • Ruohan Zong, Yang Zhang, Frank Stinar, Lanyu Shang, Huimin Zeng, Nigel Bosch and Dong Wang
A Taxonomy of Human and ML Strengths in Decision-Making to Investigate Human-ML Complementarity (🏆 Honorable Mention)
  • Charvi Rastogi, Liu Leqi, Kenneth Holstein and Hoda Heidari
A task-interdependency model for complex collaboration towards human-centered crowd work
  • David Lee and Christos Makridis
Accounting for Transfer of Learning using Human Behavior Models
  • Tyler Malloy, Yinuo Du, Fei Fang and Cleotilde Gonzalez
BackTrace: A Human-AI Collaborative Approach to Discovering Studio Backdrops in Historical Photographs
  • Jude Lim, Vikram Mohanty, Terryl Dodson and Kurt Luther
Characterizing Time Spent in Video Object Tracking Annotation Tasks: A Study of Task Complexity in Vehicle Tracking
  • Amy Rechkemmer, Alex Williams, Matthew Lease and Li Erran Li
Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection (🏆 Honorable Mention)
  • Oana Inel, Tim Draws and Lora Aroyo
Confidence Contours: Uncertainty-Aware Annotation for Medical Semantic Segmentation (🏆 Best Paper)
  • Andre Ye, Quan Ze Chen and Amy Zhang
Crowdsourced Clustering via Active Querying: Practical Algorithm with Theoretical Guarantees
  • Yi Chen, Ramya Korlakai Vinayak and Babak Hassibi
Does Human Collaboration Enhance the Accuracy of Identifying Deepfake Texts?
  • Adaku Uchendu, Jooyoung Lee, Hua Shen, Thai Le, Ting-Hao Huang and Dongwon Lee
How Crowd Worker Factors Influence Subjective Annotations: A Study of Tagging Misogynistic Hate Speech in Tweets
  • Danula Hettiachchi, Indigo Holcombe-James, Stephanie Livingstone, Anjalee de Silva, Matthew Lease, Flora D. Salim and Mark Sanderson
Humans forgo reward to instill fairness into AI
  • Lauren Treiman, Chien-Ju Ho and Wouter Kool
Informing Users about Data Imputation: Exploring the Design Space for Dealing With Non-Responses
  • Ananya Bhattacharjee, Haochen Song, Xuening Wu, Justice Tomlinson, Mohi Reza, Akmar Ehsan Chowdhury, Nina Deliu, Thomas Price and Joseph Jay Williams
Rethinking Quality Assurance for Crowdsourced Multi-ROI Image Segmentation
  • Xiaolu Lu, David Ratcliffe, Tsu-Ting Kao, Aristarkh Tikhonov, Lester Litchfield, Craig Rodger and Kaier Wang
Selective Concept Models: Permitting Stakeholder Customisation at Test-Time
  • Matthew Barker, Katherine Collins, Krishnamurthy Dvijotham, Adrian Weller and Umang Bhatt
Task as Context: A Sensemaking Perspective on Annotating Inter-Dependent Event Attributes with Non-Experts
  • Tianyi Li, Ping Wang, Tian Shi, Yali Bian and Andrey Esakia
Where Does My Model Underperform? A Human Evaluation of Slice Discovery Algorithms (🏆 Best Paper)
  • Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb and Ameet Talwalkar

Works-in-Progress and Demonstrations

CI

Boosting collective intelligence in medical diagnostics: Leveraging decision similarity as a predictor of accuracy when answers are open-ended rankings
  • Nikolas Zöller, Stefan M. Herzog and Ralf H.J.M. Kurvers
Creative Work, Technology and Occupational Innovation
  • Shiyan Zhang and Jeffrey V. Nickerson
Rapid Think Tanks (RTTs) -- A Collectively-Intelligent Methodology for Collective Deliberation
  • Ezequiel Lopez-Lopez and Ulrike Hahn
Supermind Ideator: Exploring generative AI to support creative problem-solving
  • Steven Rick, Gianni Giacomelli, Jennifer Heyman, Haoran Wen, Robert Laubacher, Nancy Taubenslag, Max Knicker, Younes Jeddi, Pranav Ragupathy and Thomas Malone
Working Harder but not Smarter: Experimental Results on the Effects of Collective Intelligence Awareness
  • Chase McDonald, Thuy Ngoc Nguyen, Carlos Botelho, Christopher Dishop, Anita Woolley and Cleotilde Gonzalez

HCOMP

A Logic-based Microtasking Approach for LLMs and Human Processing
  • Tomoya Kanda, Hiroyoshi Ito, Nobutaka Suzuki and Atsuyuki Morishima
An Empirical Study of Uncertainty in Polygon Annotation and the Impact of Quality Assurance
  • Eric Zimmermann, Justin Szeto and Frederic Ratle
Annotating Aesthetics
  • Céline Offerman, Willem van der Maden and Alessandro Bozzon
Behind the Canvas: A Human-AI Workflow for Tracing 19th-Century Photographers and Studio Backdrops
  • Vikram Mohanty, Jude Lim, Terryl Dodson and Kurt Luther
Bridging the Communication Gap between ML Engineers and Data Enrichment Workers
  • Claudel Rheault, Jerome Pasquero, Patrick Steeves and Karina Pannhasith
Clicks Don’t Lie: Inferring Generative AI Usage in Crowdsourcing through User Input Events
  • Arjun Patel and Phoebe Liu
Conversational Agents for a Deliberative Age
  • Michael Grauwde, Mark Neerincx and Olya Kudina
Conversational Swarm Intelligence, a Pilot Study
  • Louis Rosenberg, Gregg Willcox, Ganesh Mani, Hans Schumann, Miles Bader and Kokoro Sagae
CrowdLit: Crowd-powered Literature Search
  • Zhenyao Cai, Anthony Phonethibsavads and Shayan Doroudi
Investigating Public Acceptance of Shared Autonomous Vehicles: A Crowdsourcing Approach
  • Evgenia Christoforou, Carla Fabiana Chiasserini and Jahna Otterbacher
Knowledge Tracing And Gamified Onboarding to Support (Language) Learning in a Platform With A Purpose for Collecting Linguistic Judgments
  • Chris Madge, Doruk Kicikoglu, Fatima Althani, Richard Bartle, Jon Chamberlain, Udo Kruschwitz and Massimo Poesio
Optimizing Team Formation in Educational Settings
  • Aaron Kessler, Tim Scheiber, Heinz Schmitz and Ioanna Lykourentzou
Rethinking the Design of Innovation Crowdsourcing Competitions: Strategies for Shaping Participation Structure and Maximizing Value Creation
  • Juan Pablo Reyes Ochoa and Emmanuel Peterle

Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive announcements about crowdsourcing and human computation (e.g., calls for papers, job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past communications on the HCOMP mailing list.
  • Keep track of our Twitter hashtag #HCOMP2024.
  • Join the HCOMP Slack Community to stay in touch with researchers, practitioners, industry members, and crowd workers working on human computation and related topics.