AAAI HCOMP is the premier venue for disseminating the latest research findings on crowdsourcing and human computation. While artificial intelligence (AI) and human-computer interaction (HCI) represent traditional mainstays of the conference, HCOMP believes strongly in inviting, fostering, and promoting broad, interdisciplinary research. The field is distinctive in the diversity of disciplines it draws upon and contributes to, ranging from human-centered qualitative studies and HCI design, to computer science and artificial intelligence, economics and the social sciences, all the way to policy and ethics. We promote the exchange of scientific advances in human computation and crowdsourcing not only among researchers but also among engineers and practitioners, to encourage dialogue across disciplines and communities of practice.
Past meetings include eight AAAI HCOMP conferences (2013-2020) and four earlier workshops, held at the AAAI Conference on Artificial Intelligence (2011-2012) and at the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2009-2010).
An overview of the history, goals, and peer review procedures of the conference can be found in the preface to the HCOMP-13 proceedings. Additional background on the founding of the conference is discussed in a Computing Research News story.
HCOMP 2020
Best Paper Award
Motivating Novice Crowd Workers through Goal Setting: An Investigation into the Effects on Complex Crowdsourcing Task Training
Amy Rechkemmer and Ming Yin
Best Student Paper
Impact of Algorithmic Decision Making on Human Behavior: Evidence from Ultimatum Bargaining
Alexander Erlei, Franck Awounang Nekdem, Lukas Meub, Avishek Anand and Ujwal Gadiraju
Best Blue Sky Ideas
First Place: Using Human Cognitive Limitations to Enable New Systems
Vincent Conitzer
Second Place: Group-Assign: Type Theoretic Framework for Human AI Orchestration
Aik Beng Ng, Zhangsheng Lai, Simon See and Shaowei Lin
Best Demo Award
OpenTag: Understanding Human Perceptions of Image Tagging Algorithms
Kyriakos Kyriakou, Pınar Barlas, Styliani Kleanthous and Jahna Otterbacher
Best Work-in-Progress Award
Assessing Political Bias using Crowdsourced Pairwise Comparisons
Tzu-Sheng Kuo, Mcardle Hankin, Miranda Li, Andrew Ying and Cathy Wang
HCOMP 2019
Best Paper Award
A Large-Scale Study of the "Wisdom of Crowds"
Camelia Simoiu, Chiraag Sumanth, Alok Shankar and Sharad Goel
Best Paper Finalists
Human Evaluation of Models Built for Interpretability
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel Gershman and Finale Doshi-Velez
Fair Work: Crowd Work Minimum Wage with One Line of Code
Mark Whiting, Grant Hugh and Michael Bernstein
Best Poster / Demo Presentation
PairWise: Mitigating Political Bias in Crowdsourced Content Moderation
Jacob Thebault-Spieker, Sukrit Venkatagiri, David Mitchell, Chris Hurt and Kurt Luther
HCOMP 2018
Best Paper Award
All That Glitters is Gold -- An Attack Scheme on Gold Questions in Crowdsourcing
Alessandro Checco, Jo Bates and Gianluca Demartini
Best Paper Finalists
Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure
Besmira Nushi, Ece Kamar and Eric Horvitz
Best Poster / Demo Presentation
Are 1,000 Features Worth A Picture? Combining Crowdsourcing and Face Recognition to Identify Civil War Soldiers
Vikram Mohanty, David Thames, and Kurt Luther
HCOMP 2017
Best Paper Award
Toward Scalable Social Alt Text: Conversational Crowdsourcing as a Tool for Refining Vision-to-Language Technology for the Blind
Elliot Salisbury, Ece Kamar, and Meredith Ringel Morris
Best Paper Finalists
Supporting Image Geolocation with Diagramming and Crowdsourcing
Rachel Kohler, John Purviance, and Kurt Luther
HCOMP 2016
Best Paper Award
Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments
Tyler McDonnell, Matthew Lease, Mucahid Kutlu and Tamer Elsayed
Best Paper Finalists
Efficient Techniques for Crowdsourced Top-k Lists
Luca de Alfaro, Vassilis Polychronopoulos and Neoklis Polyzotis
Extending Workers' Attention Span Through Dummy Events
Avshalom Elmalech, David Sarne, Esther David and Chen Hajaj
HCOMP 2015
Best Paper Award
Best Paper Finalists
Crowdsourcing from Scratch: A Pragmatic Experiment in Data Collection by Novice Requesters
HCOMP 2014
Best Paper Award
STEP: A Scalable Testing and Evaluation Platform
Maria Christoforaki and Panos Ipeirotis
Best Paper Finalists
A Crowd of Your Own: Crowdsourcing for On-Demand Personalization
Peter Organisciak, Jaime Teevan, Susan Dumais, Robert Miller, and Adam Tauman Kalai
Crowdsourcing for Participatory Democracies: Efficient Elicitation of Social Choice Functions
David Lee, Ashish Goel, Tanja Aitamurto, and Helene Landemore
HCOMP 2013
Best Paper Award
Crowdsourcing Multi-Label Classification for Taxonomy Creation
Jonathan Bragg, Mausam, and Daniel S. Weld
Best Paper Finalists
nEmesis: Which Restaurants Should You Avoid Today?
Adam Sadilek, Sean Brennan, Henry Kautz, and Vincent Silenzio
Community Clustering: Leveraging an Academic Crowd to Form Coherent Conference Sessions
Paul André, Haoqi Zhang, Juho Kim, Lydia Chilton, Steven P. Dow, and Robert C. Miller
Tatiana Josephy, Matthew Lease, Praveen Paritosh, Markus Krause, Mihai Georgescu, Michael Tjalve, and Daniela Braga. Workshops Held at the First AAAI Conference on Human Computation and Crowdsourcing: A Report. AI Magazine, 35(2), 75-78, 2014.
Shar Steed. Harnessing Human Intellect for Computing. Computing Research Association (CRA) Computing Research News, 25(2), February 2013.
Panagiotis G. Ipeirotis, Raman Chandrasekar, and Paul N. Bennett. A Report on the Human Computation Workshop (HCOMP 2009). SIGKDD Explorations, 11(2), 80-83, 2009.
We welcome everyone who is interested in crowdsourcing and human computation to: