Papers

See the HCOMP 2024 conference proceedings here.

"Hi. I'm Molly, your virtual interviewer!" Exploring the Impact of Race and Gender in AI-powered Virtual Interview Experiences
  • Shreyan Biswas, Ji-Youn Jung, Abhishek Unnam, Kuldeep Yadav, Shreyansh Gupta and Ujwal Gadiraju
An Exploratory Study of the Impact of Task Selection Strategies on Worker Performance in Crowdsourcing Microtasks
  • Huda Banuqitah, Mark Dunlop, Maysoon Abulkhair and Sotirios Terzis
Assessing Educational Quality: Comparative Analysis of Crowdsourced, Expert, and AI-Driven Rubric Applications
  • Steven Moore, Norman Bier and John Stamper
Combining Human and AI Strengths in Object Counting under Information Asymmetry
  • Songyu Liu and Mark Steyvers
Disclosures & Disclaimers: Investigating the Impact of Transparency Disclosures and Reliability Disclaimers on Learner-LLM Interactions
  • Jessica Bo, Harsh Kumar, Michael Liut and Ashton Anderson
Estimating Contribution Quality in Online Deliberations Using a Large Language Model
  • Lodewijk Gelauff, Mohak Goyal, Bhargav Dindukurthi, Ashish Goel and Alice Siu
Investigating What Factors Influence Users' Rating of Harmful Algorithmic Bias and Discrimination (Best Paper Award)
  • Sara Kingsley, Jiayin Zhi, Wesley Hanwen Deng, Jaimie Lee, Sizhe Zhang, Motahhare Eslami, Kenneth Holstein, Jason I. Hong, Tianshi Li and Hong Shen
Mix and Match: Characterizing Heterogeneous Human Behavior in AI-assisted Decision Making
  • Zhuoran Lu, Hasan Amin, Zhuoyan Li and Ming Yin
Predicting and Understanding Human Action Decisions: Insights from Large Language Models and Cognitive Instance-Based Learning
  • Thuy Ngoc Nguyen, Kasturi Jamale and Cleotilde Gonzalez
The Atlas of AI Risks: Enhancing Public Understanding of AI Risks (Best Paper Honorable Mention)
  • Edyta Bogucka, Sanja Scepanovic and Daniele Quercia
Toward Context-aware Privacy Enhancing Technologies for Online Self-disclosure
  • Tingting Du, Jiyoon Kim, Anna Squicciarini and Sarah Rajtmajer
Unveiling the Inter-Related Preferences of Crowdworkers: Implications for Personalized and Flexible Platform Design
  • Senjuti Dutta, Rhema Linder, Alex Williams, Anastasia Kuzminykh and Scott Ruoti
User Profiling in Human-AI Design: An Empirical Case Study of Anchoring Bias, Individual Differences, and AI Attitudes
  • Mahsan Nourani, Amal Hashky and Eric Ragan
Utility-Oriented Knowledge Graph Accuracy Estimation with Limited Annotations: A Case Study on DBpedia (Best Paper Honorable Mention)
  • Stefano Marchesin, Gianmaria Silvello and Omar Alonso

Works-in-Progress

AI-assisted Gaze Detection for Proctoring Online Exams
  • Yong-Siang Shih, Zach Zhao, Chenhao Niu, Bruce Iberg, James Sharpnack, and Mirza Basim Baig
AIMindmaps: Enhancing Mind Map Creation Through AI Collaboration and User Input
  • Xinwei Lin, Yuchao Jiang, and Angela Finlayson
An Investigation of Experiences Engaging the Margins in Data-Centric Innovation
  • Gabriella Thompson, Ebtesam Al Haque, Paulette Blanc, Meme Styles, Denae Ford, Angela D. R. Smith, and Brittany Johnson
BlitzMe: A Social Media Platform Combining Smile Recognition and Human Computation for Positive Mood Enhancement
  • Fuyuki Matsubara
Claim Check-worthiness in Podcasts: Challenges and Opportunities for Human-AI Collaboration to Tackle Misinformation
  • Ujwal Gadiraju, Vinay Setty and Stefan Buijsman
Designing a Crowdsourcing Pipeline to Verify Reports from User AI Audits
  • Claire Wang, Wesley Hanwen Deng, Jason Hong, Ken Holstein, and Motahhare Eslami
Discerning Causes of Ratings Bias: A Platform for Bias Experimentation in Ratings-Based Reputation Systems
  • Mehul Maheshwari and Jacob Thebault-Spieker
From Crowdsourcing to Large Multimodal Models: Toward Enhancing Image Data Annotation with GPT-4V
  • Owen He, Ansh Jain, Axel Adonai Rodriguez-Leon, Arnav Taduvayi, and Matthew Louis Mauriello
MIRAGE: Multi-model Interface for Reviewing and Auditing Generative Text-to-Image AI
  • Matheus Kunzler Maldaner, Wesley Hanwen Deng, Jason Hong, Ken Holstein, and Motahhare Eslami
POTATO: The Portable Text Annotation Tool
  • Jiaxin Pei, Aparna Ananthasubramaniam, Xingyao Wang, Naitian Zhou, Apostolos Dedeloudis, Jackson Sargent, and David Jurgens
Simulation-based Exploration for Aggregation Algorithms in Human+AI Crowd: What factors should we consider for better results?
  • Takumi Tamura, Hiroyoshi Ito, Satoshi Oyama, and Atsuyuki Morishima
Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive crowdsourcing and human computation announcements (e.g., calls for papers and job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to browse the archive of past communications on the HCOMP mailing list.
  • Follow our Twitter hashtag #HCOMP2024.
  • Join the HCOMP Slack Community to stay in touch with researchers, industry practitioners, and crowd workers interested in human computation and related topics.