AAAI HCOMP is the premier venue for disseminating the latest research findings on crowdsourcing and human computation. While artificial intelligence (AI) and human-computer interaction (HCI) represent traditional mainstays of the conference, HCOMP believes strongly in inviting, fostering, and promoting broad, interdisciplinary research. The field is unique in the diversity of disciplines it draws upon and contributes to, ranging from human-centered qualitative studies and HCI design, to computer science and artificial intelligence, economics and the social sciences, all the way to policy and ethics. We promote the exchange of scientific advances in human computation and crowdsourcing not only among researchers but also among engineers and practitioners, to encourage dialogue across disciplines and communities of practice.
Past meetings include eleven AAAI HCOMP conferences (2013-2023) and four earlier workshops, held at the AAAI Conference on Artificial Intelligence (2011-2012), and the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2009-2010).
An overview of the history, goals, and peer review procedures of the conference can be found in the preface to the HCOMP-13 proceedings. Additional background on the founding of the conference is discussed in a Computing Research News story.
Best Paper Award
Investigating What Factors Influence Users' Rating of Harmful Algorithmic Bias and Discrimination
Sara Kingsley, Jiayin Zhi, Wesley Hanwen Deng, Jaimie Lee, Sizhe Zhang, Motahhare Eslami, Kenneth Holstein, Jason I. Hong, Tianshi Li and Hong Shen
Honorable Mention Paper Awards
The Atlas of AI Risks: Enhancing Public Understanding of AI Risks
Edyta Bogucka, Sanja Scepanovic and Daniele Quercia
Utility-Oriented Knowledge Graph Accuracy Estimation with Limited Annotations: A Case Study on DBpedia
Stefano Marchesin, Gianmaria Silvello and Omar Alonso
Best Paper Awards
Where Does My Model Underperform? A Human Evaluation of Slice Discovery Algorithms
Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb and Ameet Talwalkar
Confidence Contours: Uncertainty-Aware Annotation for Medical Semantic Segmentation
Andre Ye, Quan Ze Chen and Amy Zhang
Honorable Mention Paper Awards
Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection
Oana Inel, Tim Draws and Lora Aroyo
A Taxonomy of Human and ML Strengths in Decision-Making to Investigate Human-ML Complementarity
Charvi Rastogi, Liu Leqi, Kenneth Holstein and Hoda Heidari
Best Paper Award
It Is like Finding a Polar Bear in the Savannah! Concept-Level AI Explanations with Analogical Inference from Commonsense Knowledge
Gaole He, Agathe Balayn, Stefan Buijsman, Jie Yang, and Ujwal Gadiraju
Honorable Mention Paper Awards
Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, and Amit Dhurandhar
Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design
Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer, and Nihar B. Shah
Best Poster/Demo Award
Human-in-the-loop mixup
Katherine Collins, Umang Bhatt, Weiyang Liu, Bradley Love, and Adrian Weller
Best Paper Award
A Checklist to Combat Cognitive Biases in Crowdsourcing
Tim Draws, Alisa Rieger, Oana Inel, Ujwal Gadiraju and Nava Tintarev
Honorable Mention Paper Awards
On the Bayesian Rational Assumption in Information Design
Wei Tang and Chien-Ju Ho
Rapid Instance-level Knowledge Acquisition from Class-level Common Sense
Chris Welty, Lora Aroyo, Flip Korn, Sara Marie Mc Carthy and Shubin Zhao
Best Blue Sky Ideas
First Place:
The Science of Rejection: A Research Area for Human Computation
Burcu Sayin, Jie Yang, Andrea Passerini and Fabio Casati
Second Place:
Human Computation and Crowdsourcing for Earth
Yasaman Rohanifar, Syed Ishtiaque Ahmed, Sharifa Sultana, Prateek Chanda and Malay Bhattacharyya
Third Place:
Human in the Loop for Machine Creativity
Neo Christopher Chung
Best Poster/Demo Award
FindItOut: A Multiplayer GWAP for Collecting Plural Knowledge
Agathe Balayn, Gaole He, Andrea Hu, Jie Yang and Ujwal Gadiraju
Best Paper Award
Motivating Novice Crowd Workers through Goal Setting: An Investigation into the Effects on Complex Crowdsourcing Task Training
Amy Rechkemmer and Ming Yin
Best Student Paper
Impact of Algorithmic Decision Making on Human Behavior: Evidence from Ultimatum Bargaining
Alexander Erlei, Franck Awounang Nekdem, Lukas Meub, Avishek Anand and Ujwal Gadiraju
Best Blue Sky Ideas
First Place:
Using Human Cognitive Limitations to Enable New Systems
Vincent Conitzer
Second Place:
Group-Assign: Type Theoretic Framework for Human AI Orchestration
Aik Beng Ng, Zhangsheng Lai, Simon See and Shaowei Lin
Best Demo Award
OpenTag: Understanding Human Perceptions of Image Tagging Algorithms
Kyriakos Kyriakou, Pınar Barlas, Styliani Kleanthous and Jahna Otterbacher
Best Work-in-Progress Award
Assessing Political Bias using Crowdsourced Pairwise Comparisons
Tzu-Sheng Kuo, Mcardle Hankin, Miranda Li, Andrew Ying and Cathy Wang
Best Paper Award
A Large-Scale Study of the "Wisdom of Crowds"
Camelia Simoiu, Chiraag Sumanth, Alok Shankar and Sharad Goel
Best Paper Finalists
Human Evaluation of Models Built for Interpretability
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel Gershman and Finale Doshi-Velez
Fair Work: Crowd Work Minimum Wage with One Line of Code
Mark Whiting, Grant Hugh and Michael Bernstein
Best Poster / Demo Presentation
PairWise: Mitigating Political Bias in Crowdsourced Content Moderation
Jacob Thebault-Spieker, Sukrit Venkatagiri, David Mitchell, Chris Hurt and Kurt Luther
Best Paper Award
All That Glitters is Gold -- An Attack Scheme on Gold Questions in Crowdsourcing
Alessandro Checco, Jo Bates and Gianluca Demartini
Best Paper Finalists
Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure
Besmira Nushi, Ece Kamar and Eric Horvitz
Best Poster / Demo Presentation
Are 1,000 Features Worth A Picture? Combining Crowdsourcing and Face Recognition to Identify Civil War Soldiers
Vikram Mohanty, David Thames and Kurt Luther
Best Paper Award
Toward Scalable Social Alt Text: Conversational Crowdsourcing as a Tool for Refining Vision-to-Language Technology for the Blind
Elliot Salisbury, Ece Kamar, and Meredith Ringel Morris
Best Paper Finalists
Supporting Image Geolocation with Diagramming and Crowdsourcing
Rachel Kohler, John Purviance, and Kurt Luther
Best Paper Award
Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments
Tyler McDonnell, Matthew Lease, Mucahid Kutlu and Tamer Elsayed
Best Paper Finalists
Efficient Techniques for Crowdsourced Top-k Lists
Luca de Alfaro, Vassilis Polychronopoulos and Neoklis Polyzotis
Extending Workers' Attention Span Through Dummy Events
Avshalom Elmalech, David Sarne, Esther David and Chen Hajaj
Best Paper Award
Best Paper Finalists
Crowdsourcing from Scratch: A Pragmatic Experiment in Data Collection by Novice Requesters
Best Paper Award
STEP: A Scalable Testing and Evaluation Platform
Maria Christoforaki and Panos Ipeirotis
Best Paper Finalists
A Crowd of Your Own: Crowdsourcing for On-Demand Personalization
Peter Organisciak, Jaime Teevan, Susan Dumais, Robert Miller, and Adam Tauman Kalai
Crowdsourcing for Participatory Democracies: Efficient Elicitation of Social Choice Functions
David Lee, Ashish Goel, Tanja Aitamurto, and Helene Landemore
Best Paper Award
Crowdsourcing Multi-Label Classification for Taxonomy Creation
Jonathan Bragg, Mausam, and Daniel S. Weld
Best Paper Finalists
nEmesis: Which Restaurants Should You Avoid Today?
Adam Sadilek, Sean Brennan, Henry Kautz, and Vincent Silenzio
Community Clustering: Leveraging an Academic Crowd to Form Coherent Conference Sessions
Paul André, Haoqi Zhang, Juho Kim, Lydia Chilton, Steven P. Dow, and Robert C. Miller
Workshops Held at the First AAAI Conference on Human Computation and Crowdsourcing: A Report. Tatiana Josephy, Matthew Lease, Praveen Paritosh, Markus Krause, Mihai Georgescu, Michael Tjalve, and Daniela Braga. AI Magazine, 35(2), 75-78, 2014.
Shar Steed. Harnessing Human Intellect for Computing. Computing Research Association (CRA) Computing Research News, Vol. 25 No. 2, February 2013.
A report on the human computation workshop (HCOMP 2009). Panagiotis G. Ipeirotis, Raman Chandrasekar, and Paul N. Bennett. SIGKDD Explorations 11, no. 2 (2009): 80-83.
We welcome everyone who is interested in crowdsourcing and human computation to: