Papers

"Hi. I'm Molly, your virtual interviewer!" Exploring the Impact of Race and Gender in AI-powered Virtual Interview Experiences
  • Shreyan Biswas, Ji-Youn Jung, Abhishek Unnam, Kuldeep Yadav, Shreyansh Gupta and Ujwal Gadiraju
An Exploratory Study of the Impact of Task Selection Strategies on Worker Performance in Crowdsourcing Microtasks
  • Huda Banuqitah, Mark Dunlop, Maysoon Abulkhair and Sotirios Terzis
Assessing Educational Quality: Comparative Analysis of Crowdsourced, Expert, and AI-Driven Rubric Applications
  • Steven Moore, Norman Bier and John Stamper
Combining Human and AI Strengths in Object Counting under Information Asymmetry
  • Songyu Liu and Mark Steyvers
Disclosures & Disclaimers: Investigating the Impact of Transparency Disclosures and Reliability Disclaimers on Learner-LLM Interactions
  • Jessica Bo, Harsh Kumar, Michael Liut and Ashton Anderson
Estimating Contribution Quality in Online Deliberations Using a Large Language Model
  • Lodewijk Gelauff, Mohak Goyal, Bhargav Dindukurthi, Ashish Goel and Alice Siu
Investigating What Factors Influence Users' Rating of Harmful Algorithmic Bias and Discrimination
  • Sara Kingsley, Jiayin Zhi, Wesley Hanwen Deng, Jaimie Lee, Sizhe Zhang, Motahhare Eslami, Kenneth Holstein, Jason I. Hong, Tianshi Li and Hong Shen
Mix and Match: Characterizing Heterogeneous Human Behavior in AI-assisted Decision Making
  • Zhuoran Lu, Hasan Amin, Zhuoyan Li and Ming Yin
Predicting and Understanding Human Action Decisions: Insights from Large Language Models and Cognitive Instance-Based Learning
  • Thuy Ngoc Nguyen, Kasturi Jamale and Cleotilde Gonzalez
The Atlas of AI Risks: Enhancing Public Understanding of AI Risks
  • Edyta Bogucka, Sanja Scepanovic and Daniele Quercia
Toward Context-aware Privacy Enhancing Technologies for Online Self-disclosure
  • Tingting Du, Jiyoon Kim, Anna Squicciarini and Sarah Rajtmajer
Unveiling the Inter-Related Preferences of Crowdworkers: Implications for Personalized and Flexible Platform Design
  • Senjuti Dutta, Rhema Linder, Alex Williams, Anastasia Kuzminykh and Scott Ruoti
User Profiling in Human-AI Design: An Empirical Case Study of Anchoring Bias, Individual Differences, and AI Attitudes
  • Mahsan Nourani, Amal Hashky and Eric Ragan
Utility-Oriented Knowledge Graph Accuracy Estimation with Limited Annotations: A Case Study on DBpedia
  • Stefano Marchesin, Gianmaria Silvello and Omar Alonso
Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive announcements about crowdsourcing and human computation (e.g., calls for papers and job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past announcements on the HCOMP mailing list.
  • Follow our Twitter hashtag #HCOMP2024.
  • Join the HCOMP Slack Community to stay in touch with researchers, industry practitioners, and crowd workers interested in human computation and related topics.