Program

  • Plenary Panels

Find detailed information about the panelists.

  • Accepted Papers

View a list of accepted papers.

  • Works-in-Progress and Demonstrations

View a list of accepted works-in-progress and demonstrations.

  • HCOMP/CI 2023 Schedule

Nov 6 (Monday), IDE Arena room

8:45am-9:15am: Walk in; Opening Remarks by the HCOMP/CI 2023 Chairs (start at 9:00am)

9:15am-10:45am: Panel Session 1: Procaccia, Ugander, Zhang

10:45am-11:00am: Coffee Break

11:00am-12:00pm: Session 1: Design (Chair: David Lee)

DesignAID: Using generative AI to avoid design fixation by Alice Cai, Steven Rick, Jennifer Heyman, Yanxia Zhang, Alexandre Filipowicz, Matthew Hong, Heishiro Toyoda, Matt Klenk and Thomas Malone

Photo Steward: A Deliberative Collective Intelligence Workflow for Validating Historical Archives by Vikram Mohanty and Kurt Luther

Informing Users about Data Imputation: Exploring the Design Space for Dealing With Non-Responses by Ananya Bhattacharjee, Haochen Song, Xuening Wu, Justice Tomlinson, Mohi Reza, Akmar Ehsan Chowdhury, Nina Deliu, Thomas Price and Joseph Jay Williams

12:00pm-1:00pm: Lunch Break

1:00pm-2:30pm: Session 2: Crowds and Collectives (Chair: Danula Hettiachchi)

BackTrace: A Human-AI Collaborative Approach to Discovering Studio Backdrops in Historical Photographs by Jude Lim, Vikram Mohanty, Terryl Dodson and Kurt Luther

Does Human Collaboration Enhance the Accuracy of Identifying LLM-Generated Deepfake Texts? by Adaku Uchendu, Jooyoung Lee, Hua Shen, Thai Le, Ting-Hao Huang and Dongwon Lee

Gone With the Wind: Honey Bee Collective Scenting in the Presence of External Wind by Dieu My T. Nguyen, Golnar Gharooni Fard, Michael L. Iuzzolino and Orit Peleg

Understanding (Ir)rational Herding Online by Henry Dambanemuya, Johannes Wachs and Agnes Horvat

2:30pm-3:00pm: Coffee Break

3:00pm-4:30pm: Session 3: Crowd Modeling and Optimization (Chair: Adaku Uchendu)

A Cluster-Aware Transfer Learning for Bayesian Optimization of Personalized Preference Models in the Crowd Setting by Haruto Yamasaki, Masaki Matsubara, Hiroyoshi Ito, Yuta Nambu, Masahiro Kohjima, Yuki Kurauchi, Ryuji Yamamoto and Atsuyuki Morishima

Crowdsourced Clustering via Active Querying: Practical Algorithm with Theoretical Guarantees by Yi Chen, Ramya Korlakai Vinayak and Babak Hassibi

Rethinking Quality Assurance for Crowdsourced Multi-ROI Image Segmentation by Xiaolu Lu, David Ratcliffe, Tsu-Ting Kao, Aristarkh Tikhonov, Lester Litchfield, Craig Rodger and Kaier Wang

Accounting for Transfer of Learning using Human Behavior Models by Tyler Malloy, Yinuo Du, Fei Fang and Cleotilde Gonzalez

4:30pm-5:30pm: Speed talks - WIP & Demo & DC

5:30pm-7:00pm: Welcome reception

Nov 7 (Tuesday), IDE Arena room

8:45am-9:00am: Walk in

9:00am-10:30am: Session 4: Fairness (Chair: Ujwal Gadiraju)

Humans forgo reward to instill fairness into AI by Lauren Treiman, Chien-Ju Ho and Wouter Kool

A Crowd–AI Collaborative Approach to Address Demographic Bias for Student Performance Prediction in Online Education by Ruohan Zong, Yang Zhang, Frank Stinar, Lanyu Shang, Huimin Zeng, Nigel Bosch and Dong Wang

Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection by Oana Inel, Tim Draws and Lora Aroyo

Gender bias and stereotypes in Large Language Models by Hadas Kotek, Rikker Dockum and David Sun

10:30am-11:00am: Coffee Break

11:00am-12:00pm: Session 5: Explainability (Chair: Vikram Mohanty)

Where Does My Model Underperform? A Human Evaluation of Slice Discovery Algorithms by Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb and Ameet Talwalkar

Selective Concept Models: Permitting Stakeholder Customisation at Test-Time by Matthew Barker, Katherine Collins, Krishnamurthy Dvijotham, Adrian Weller and Umang Bhatt

Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge by Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Kate Glazko, Matthew Lee, Scott Carter, Nikos Arechiga, Haiyi Zhu and Kenneth Holstein

12:00pm-1:00pm: Lunch Break

1:00pm-2:30pm: Panel Session 2: Lehdonvirta, Mulgan, Helberger

2:30pm-3:00pm: Coffee Break

3:00pm-4:30pm: Session 6: Annotation (Chair: Shiyan Zhang)

Characterizing Time Spent in Video Object Tracking Annotation Tasks: A Study of Task Complexity in Vehicle Tracking by Amy Rechkemmer, Alex Williams, Matthew Lease and Li Erran Li

How Crowd Worker Factors Influence Subjective Annotations: A Study of Tagging Misogynistic Hate Speech in Tweets by Danula Hettiachchi, Indigo Holcombe-James, Stephanie Livingstone, Anjalee de Silva, Matthew Lease, Flora D. Salim and Mark Sanderson

Confidence Contours: Uncertainty-Aware Annotation for Medical Semantic Segmentation by Andre Ye, Quan Ze Chen and Amy Zhang

Task as Context: A Sensemaking Perspective on Annotating Inter-Dependent Event Attributes with Non-Experts by Tianyi Li, Ping Wang, Tian Shi, Yali Bian and Andy Esakia

4:30pm-5:30pm: Town Hall

7:00pm: Social Dinner

Nov 8 (Wednesday), IDE Arena room

8:45am-9:00am: Walk in

9:00am-10:30am: Panel Session 3: Wu, Hidalgo, Margetts

10:30am-11:00am: Coffee Break

11:00am-12:30pm: Session 7: Human-AI Collaboration (Chair: Kurt Luther)

A Taxonomy of Human and ML Strengths in Decision-Making to Investigate Human-ML Complementarity by Charvi Rastogi, Liu Leqi, Kenneth Holstein and Hoda Heidari

Player-Guided AI outperforms standard AI in Sequence Alignment Puzzles by Renata Mutalova, Roman Sarrazin-Gendron, Parham Ghasemloo Gheidari, Eddie Cai, Gabriel Richard, Sébastien Caisse, Rob Knight, Mathieu Blanchette, Attila Szantner and Jérôme Waldispühl

A task-interdependency model for complex collaboration towards human-centered crowd work by David Lee and Christos Makridis

Designing Ecosystems of Intelligence from First Principles by Karl J. Friston, Maxwell J.D. Ramstead, Alex B. Kiefer, Alexander Tschantz, Christopher L. Buckley, Mahault Albarracin, Riddhi J. Pitliya, Conor Heins, Brennan Klein, Beren Millidge, Dalton A.R. Sakthivadivel, Toby St Clere Smithe, Magnus Koudahl, Safae Essafi Tremblay, Capm Petersen, Kaiser Fung, Jason G. Fox, Steven Swanson, Dan Mapes and Gabriel René

12:30pm-12:45pm: Closing

From 3:00pm: Crowdcamp (Remote session)

Nov 9 (Thursday), Workshop rooms

8:45am-5:30pm: Workshops, DC, Crowdcamp
  • Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive crowdsourcing and human computation announcements (e.g., calls for papers and job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past communications on the HCOMP mailing list.
  • Keep track of our Twitter hashtag #HCOMP2024.
  • Join the HCOMP Slack Community to stay in touch with researchers, industry players, practitioners, and crowd workers interested in human computation and related topics.