Papers

A Human-Centric Perspective on Model Monitoring
  • Murtuza Shergadwala, Himabindu Lakkaraju, and Krishnaram Kenthapadi
Allocation Schemes in Analytic Evaluation: Applicant-Centric Holistic or Attribute-Centric Segmented?
  • Jingyan Wang, Carmel Baharav, Nihar Shah, Anita Woolley, and R Ravi
CHIME: Causal Human-in-the-Loop Model Explanations
  • Shreyan Biswas, Lorenzo Corti, Stefan Buijsman, and Jie Yang
Comparing Experts and Novices for AI Data Work: Insights on Allocating Human Intelligence to Design a Conversational Agent
  • Lu Sun, Yuhan Liu, Grace Joseph, Zhou Yu, Haiyi Zhu, and Steven P. Dow
Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
  • Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, and Amit Dhurandhar
Crowdsourcing Perceptions of Gerrymandering
  • Benjamin Kelly, Inwon Kang, and Lirong Xia
Eliciting and Learning with Soft Labels from Every Annotator
  • Katherine Collins, Umang Bhatt, and Adrian Weller
Gesticulate for Health’s Sake! Understanding the Use of Gestures as an Input Modality for Microtask Crowdsourcing
  • Garrett Allen, Andrea Hu, and Ujwal Gadiraju
Goal-Setting Behavior of Workers on Crowdsourcing Platforms: An Exploratory Study on MTurk and Prolific
  • Tahir Abbas and Ujwal Gadiraju
HSI: Human Saliency Imitator for Benchmarking Saliency-Based Model Explanations
  • Yi Yang, Yueyuan Zheng, Didan Deng, Jindi Zhang, Yongxiang Huang, Yumeng Yang, Janet H. Hsiao, and Caleb Chen Cao
Identifying Possible Winners in Ranked Choice Voting Elections with Outstanding Ballots
  • Alborz Jelvani and Amelie Marian
It Is Like Finding a Polar Bear in the Savannah! Concept-Level AI Explanations with Analogical Inference from Commonsense Knowledge
  • Gaole He, Agathe Balayn, Stefan Buijsman, Jie Yang, and Ujwal Gadiraju
Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design
  • Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer, and Nihar Shah
Performance of Paid and Volunteer Image Labeling in Citizen Science — A Retrospective Analysis
  • Kutub Gandhi, Sofia Eleni Spatharioti, Scott Eustis, Sara Wylie, and Seth Cooper
SignUpCrowd: Using Sign-Language as an Input Modality for Microtask Crowdsourcing
  • Aayush Singh, Sebastian Wehkamp, and Ujwal Gadiraju
Strategyproofing Peer Assessment via Partitioning: The Price in Terms of Evaluators’ Expertise
  • Komal Dhull, Steven Jecmen, Pravesh Kothari, and Nihar Shah
Taking Advice from (Dis)Similar Machines: The Impact of Human-Machine Similarity on Machine-Assisted Decision-Making
  • Nina Grgić-Hlača, Claude Castelluccia, and Krishna P. Gummadi
TaskLint: Automated Detection of Ambiguities in Task Instructions
  • V. K. Chaithanya Manam, Joseph Thomas, and Alexander J. Quinn
When More Data Lead Us Astray: Active Data Acquisition in the Presence of Label Bias
  • Yunyi Li, Maria De-Arteaga, and Maytal Saar-Tsechansky
“If It Didn’t Happen, Why Would I Change My Decision?”: How Judges Respond to Counterfactual Explanations for the Public Safety Assessment
  • Yaniv Yacoby, Ben Green, Christopher L. Griffin Jr., and Finale Doshi-Velez

Doctoral Consortium

Beyond Noisy Labels: Investigating the Impact of Label Bias for Active Label Acquisition
  • Yunyi Li
Crowdsourcing Image Datasets: An Examination of Ground-Truth in Labeling, Text Segmentation, & Sampling Bias
  • Matthew Swindall
Design at Scale for Large-Scale Online Communication and Collaboration
  • Kehua Lei
From the Lab to the Crowd: Hybrid Human-Machine Systems to Support Novice Data Workers in Data Quality Tasks
  • Shaochen Yu
Making Peer Review Robust to Undesirable Behavior
  • Steven Jecmen
Organizing Crowds to Detect Manipulative Content
  • Claudia Flores-Saviaga
Supporting Multi-device Work Practices in Crowdwork
  • Senjuti Dutta
What the Tech? Understanding How Technology Supports Learning in Citizen Science
  • Holly Rosser

Works-in-Progress and Demonstrations

Group A

A Quantum Inspired Genetic Algorithm for Weighted Constrained Crowd Judgement Analysis
  • Suraj Mandal, Sujoy Chatterjee and Anirban Mukhopadhyay
Aggregating Crowd Intelligence over Open Source Information: An Inference Rule Centric Approach
  • Kai Kida, Hiroyoshi Ito, Masaki Matsubara, Nobutaka Suzuki and Atsuyuki Morishima
Clustering and Evaluating Without Knowing How To: A Case Study of Fashion Datasets
  • Daniil Likhobaba, Daniil Fedulov and Dmitry Ustalov
Common Law Annotations: Investigating the Stability of Dialog Annotations
  • Seunggun Lee, Alexandra DeLucia, Ryan Guan, Rubing Li, Nikita Nangia, Shalaka Vaidya, Lining Zhang, Zijun Yuan, Praneeth Ganedi, Britney Ngaw, Aditya Singhal and João Sedoc
Exploring the Impact of Sub-Task Inter-Dependency on Crowdsourced Event Annotation
  • Tianyi Li, Ping Wang, Tian Shi and Andrey Esakia
Searching for Structure in Unfalsifiable Claims
  • Peter Ebert Christensen, Frederik Warburg, Menglin Jia and Serge Belongie
Too Slow to Be Useful? On Incorporating Humans in the Loop of Smart Speakers
  • Shih-Hong Huang, Chieh-Yang Huang, Yuxin Deng, Hua Shen, Szu-Chi Kuan and Ting-Hao Kenneth Huang
Toward Human-in-the-Loop AI Fairness with Crowdsourcing: Effects of Crowdworkers' Characteristics and Fairness Metrics on AI Fairness Perception
  • Yuri Nakao

Group B

A Virtual World Game For Natural Language Annotation
  • Chris Madge, Fatima Althani, Jussi Brightmore, Doruk Kicikoglu, Richard Bartle, Jon Chamberlain, Udo Kruschwitz and Massimo Poesio
Crowdsourcing-based Feedback Analysis on Educational Management
  • Mahuya Soe, Sujoy Chatterjee and Anirban Mukhopadhyay
Data-Scanner-4C: Format Inconsistency Identification by Means of Crowdsourcing
  • Shaochen Yu, Lei Han, Marta Indulska, Shazia Sadiq and Gianluca Demartini
Human-in-the-loop mixup
  • Katherine Collins, Umang Bhatt, Weiyang Liu, Bradley Love and Adrian Weller
Mia: A Web Platform for Mixed-Initiative Annotation
  • Jarvis Tse, Alex C. Williams, Joslin Goh, Timothy Player, James H. Brusuelas and Edith Law
Multi-Armed Bandit Approach to Task Assignment across Multi Crowdsourcing Platforms
  • Yunyi Xiao, Yu Yamashita, Hiroyoshi Ito, Masaki Matsubara and Atsuyuki Morishima
When Crowd Meets Persona: Creating a Large-Scale Open-Domain Persona Dialogue Corpus
  • Won Ik Cho, Yoon Kyung Lee, Seoyeon Bae, Jihwan Kim, Sangah Park, Moosung Kim, Sowon Hahn and Nam Soo Kim
Worker Qualifications for Image-Aesthetic-Assessment Tasks in Crowdsourcing
  • Yudai Kato, Marie Katsurai and Keishi Tajima

Stay Connected: HCOMP Community

We welcome everyone interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive crowdsourcing and human computation announcements (e.g., calls for papers and job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past announcements on the HCOMP mailing list.
  • Keep track of our Twitter hashtag #HCOMP2024.
  • Join the HCOMP Slack Community to stay in touch with researchers, practitioners, industry members, and crowd workers interested in human computation and related topics.