Papers

A Human-Centric Perspective on Model Monitoring
  • Murtuza Shergadwala, Himabindu Lakkaraju, and Krishnaram Kenthapadi
Allocation Schemes in Analytic Evaluation: Applicant-Centric Holistic or Attribute-Centric Segmented?
  • Jingyan Wang, Carmel Baharav, Nihar Shah, Anita Woolley, and R Ravi
CHIME: Causal Human-in-the-Loop Model Explanations
  • Shreyan Biswas, Lorenzo Corti, Stefan Buijsman, and Jie Yang
Comparing Experts and Crowds for AI Data Work: Allocating Human Intelligence to Design a Conversational Agent
  • Lu Sun, Yuhan Liu, Grace Joseph, Yu Zhou, Haiyi Zhu, and Steven P. Dow
Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
  • Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, and Amit Dhurandhar
Crowdsourcing Perceptions of Gerrymandering
  • Benjamin Kelly, Inwon Kang, and Lirong Xia
Eliciting and Learning with Soft Labels from Every Annotator
  • Katherine Collins, Umang Bhatt, and Adrian Weller
Gesticulate for Health’s Sake! Understanding the Use of Gestures as an Input Modality for Microtask Crowdsourcing
  • Garrett Allen, Andrea Hu, and Ujwal Gadiraju
Goal-Setting Behavior of Workers on Crowdsourcing Platforms: An Exploratory Study on MTurk and Prolific
  • Tahir Abbas and Ujwal Gadiraju
HSI: Human Saliency Imitator for Benchmarking Saliency-Based Model Explanations
  • Yi Yang, Yueyuan Zheng, Didan Deng, Jindi Zhang, Yongxiang Huang, Yumeng Yang, Janet H. Hsiao, and Caleb Chen Cao
Identifying Possible Winners in Ranked Choice Voting Elections with Outstanding Ballots
  • Alborz Jelvani and Amelie Marian
It Is like Finding a Polar Bear in the Savannah! Concept-Level AI Explanations with Analogical Inference from Commonsense Knowledge
  • Gaole He, Agathe Balayn, Stefan Buijsman, Jie Yang, and Ujwal Gadiraju
Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design
  • Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer, and Nihar Shah
Performance of Paid and Volunteer Image Labeling in Citizen Science — A Retrospective Analysis
  • Kutub Gandhi, Sofia Eleni Spatharioti, Scott Eustis, Sara Wylie, and Seth Cooper
SignUpCrowd: Using Sign-Language as an Input Modality for Microtask Crowdsourcing
  • Aayush Singh, Sebastian Wehkamp, and Ujwal Gadiraju
Strategyproofing Peer Assessment via Partitioning: The Price in Terms of Evaluators’ Expertise
  • Komal Dhull, Steven Jecmen, Pravesh Kothari, and Nihar Shah
Taking Advice from (Dis)Similar Machines: The Impact of Human-Machine Similarity on Machine-Assisted Decision-Making
  • Nina Grgić-Hlača, Claude Castelluccia, and Krishna P. Gummadi
TaskLint: Automated Detection of Ambiguities in Task Instructions
  • V. K. Chaithanya Manam, Joseph Thomas, and Alexander J. Quinn
When More Data Lead Us Astray: Active Data Acquisition in the Presence of Label Bias
  • Yunyi Li, Maria De-Arteaga, and Maytal Saar-Tsechansky
“If It Didn’t Happen, Why Would I Change My Decision?”: How Judges Respond to Counterfactual Explanations for the Public Safety Assessment
  • Yaniv Yacoby, Ben Green, Christopher L. Griffin Jr., and Finale Doshi-Velez
Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive crowdsourcing and human computation announcements (e.g., calls for papers and job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past communications on the HCOMP mailing list.
  • Visit the HCOMP blog, where we post new ideas related to crowd and social computing research.
  • Keep track of our Twitter hashtag #HCOMP2022.
  • Join the HCOMP Slack Community to stay in touch with researchers, industry practitioners, and crowd workers interested in human computation and related topics.