Overview

AAAI HCOMP is the premier venue for disseminating the latest research on crowdsourcing and human computation. While artificial intelligence (AI) and human-computer interaction (HCI) are traditional mainstays of the conference, HCOMP believes strongly in inviting, fostering, and promoting broad, interdisciplinary research. The field is unique in the diversity of disciplines it draws upon and contributes to, ranging from human-centered qualitative studies and HCI design, to computer science and artificial intelligence, to economics and the social sciences, all the way to policy and ethics. We promote the exchange of scientific advances in human computation and crowdsourcing not only among researchers but also among engineers and practitioners, encouraging dialogue across disciplines and communities of practice.

Past meetings include seven AAAI HCOMP conferences (2013-2019) and four earlier workshops held at the AAAI Conference on Artificial Intelligence (2011-2012) and the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2009-2010).

An overview of the history, goals, and peer review procedures of the conference can be found in the preface to the HCOMP-13 proceedings. Additional background on the founding of the conference is discussed in a Computing Research News story.

Past Meetings

HCOMP 2013 Photo Album

Past Proceedings

Past Paper Awards

  • 2019

Best Paper Award
A Large-Scale Study of the "Wisdom of Crowds"
Camelia Simoiu, Chiraag Sumanth, Alok Shankar and Sharad Goel

Best Paper Finalists
Human Evaluation of Models Built for Interpretability
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel Gershman and Finale Doshi-Velez
Fair Work: Crowd Work Minimum Wage with One Line of Code
Mark Whiting, Grant Hugh and Michael Bernstein

Best Poster / Demo Presentation
PairWise: Mitigating Political Bias in Crowdsourced Content Moderation
Jacob Thebault-Spieker, Sukrit Venkatagiri, David Mitchell, Chris Hurt and Kurt Luther

  • 2018

Best Paper Award
All That Glitters is Gold -- An Attack Scheme on Gold Questions in Crowdsourcing
Alessandro Checco, Jo Bates and Gianluca Demartini

Best Paper Finalists
Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure
Besmira Nushi, Ece Kamar and Eric Horvitz

Best Poster / Demo Presentation
Are 1,000 Features Worth A Picture? Combining Crowdsourcing and Face Recognition to Identify Civil War Soldiers
Vikram Mohanty, David Thames, and Kurt Luther

  • 2017

Best Paper Award
Toward Scalable Social Alt Text: Conversational Crowdsourcing as a Tool for Refining Vision-to-Language Technology for the Blind
Elliot Salisbury, Ece Kamar, and Meredith Ringel Morris

Best Paper Finalists
Supporting Image Geolocation with Diagramming and Crowdsourcing
Rachel Kohler, John Purviance, and Kurt Luther

  • 2016

Best Paper Award
Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments
Tyler McDonnell, Matthew Lease, Mucahid Kutlu, and Tamer Elsayed

Best Paper Finalists
Efficient Techniques for Crowdsourced Top-k Lists
Luca de Alfaro, Vassilis Polychronopoulos and Neoklis Polyzotis

Extending Workers' Attention Span Through Dummy Events
Avshalom Elmalech, David Sarne, Esther David and Chen Hajaj

  • 2015

Best Paper Award
Crowdsourcing in the Field: A Case Study Using Local Crowds for Event Reporting
Elena Agapie, Jaime Teevan, and Andrés Monroy-Hernández

Best Paper Finalists
Tropel: Crowdsourcing Detectors with Minimal Training
Genevieve Patterson, Grant Van Horn, Serge Belongie, Pietro Perona, and James Hays

Crowdsourcing from Scratch: A Pragmatic Experiment in Data Collection by Novice Requesters
Alexandra Papoutsaki, Hua Guo, Danaë Metaxa-Kakavouli, Connor Gramazio, Jeff Rasley, Wenting Xie, Guan Wang, and Jeff Huang

  • 2014

Best Paper Award
STEP: A Scalable Testing and Evaluation Platform
Maria Christoforaki and Panos Ipeirotis

Best Paper Finalists
A Crowd of Your Own: Crowdsourcing for On-Demand Personalization
Peter Organisciak, Jaime Teevan, Susan Dumais, Robert Miller, and Adam Tauman Kalai

Crowdsourcing for Participatory Democracies: Efficient Elicitation of Social Choice Functions
David Lee, Ashish Goel, Tanja Aitamurto, and Helene Landemore

  • 2013

Best Paper Award
Crowdsourcing Multi-Label Classification for Taxonomy Creation
Jonathan Bragg, Mausam, and Daniel S. Weld

Best Paper Finalists
nEmesis: Which Restaurants Should You Avoid Today?
Adam Sadilek, Sean Brennan, Henry Kautz, and Vincent Silenzio

Community Clustering: Leveraging an Academic Crowd to Form Coherent Conference Sessions
Paul André, Haoqi Zhang, Juho Kim, Lydia Chilton, Steven P. Dow, and Robert C. Miller

Library

  • Past Reports

Tatiana Josephy, Matthew Lease, Praveen Paritosh, Markus Krause, Mihai Georgescu, Michael Tjalve, and Daniela Braga. Workshops Held at the First AAAI Conference on Human Computation and Crowdsourcing: A Report. AI Magazine, 35(2), 75-78, 2014.

Shar Steed. Harnessing Human Intellect for Computing. Computing Research Association (CRA) Computing Research News, Vol. 25, No. 2, February 2013.

Panagiotis G. Ipeirotis, Raman Chandrasekar, and Paul N. Bennett. A Report on the Human Computation Workshop (HCOMP 2009). SIGKDD Explorations, 11(2), 80-83, 2009.

  • Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive crowdsourcing and human computation announcements (e.g., calls for papers, job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past communications on the HCOMP mailing list.
  • Keep track of our Twitter hashtag #HCOMP2024.
  • Join the HCOMP Slack Community to stay in touch with researchers, industry players, practitioners, and crowd workers interested in human computation and related topics.