Accepted Papers

Accepted papers are listed below, grouped by track.

Full Papers

A Hybrid Approach to Identifying Unknown Unknowns of Predictive Models
  • Colin Vandenhof
A Large-Scale Study of the "Wisdom of Crowds"
  • Camelia Simoiu, Chiraag Sumanth, Alok Shankar and Sharad Goel
AI-based Request Augmentation to Increase Crowdsourcing Participation
  • Ranjay Krishna, Li Fei-Fei, Michael Bernstein, Pranav Khadpe and Junwon Park
Beyond Accuracy: On the Role of Mental Models in Human-AI Teams
  • Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel Weld, Walter Lasecki and Eric Horvitz
Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval
  • Arijit Ray, Yi Yao, Rakesh Kumar, Ajay Divakaran and Giedrius Burachas
Crowdsourced PAC Learning under Classification Noise
  • Shelby Heinecke and Lev Reyzin
Fair Work: Crowd Work Minimum Wage with One Line of Code
  • Mark Whiting, Grant Hugh and Michael Bernstein
Gamification of Loop-Invariant Discovery from Code
  • Andrew Walter, Benjamin Boskin, Seth Cooper and Panagiotis Manolios
Going Against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis
  • Yan Shvartzshnaider, Noah Apthorpe, Nick Feamster and Helen Nissenbaum
How Do We Talk about Other People? Group (Un)Fairness in Natural Language Image Descriptions
  • Jahna Otterbacher, Pınar Barlas, Styliani Kleanthous and Kyriakos Kyriakou
Human Evaluation of Models Built for Interpretability
  • Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel Gershman and Finale Doshi-Velez
Interpretable Image Recognition with Hierarchical Prototypes
  • Peter Hase, Chaofan Chen, Oscar Li and Cynthia Rudin
Learning to Predict Population-Level Label Distributions
  • Tong Liu, Pratik Sanjay Bongale, Akash Venkatachalam and Christopher Homan
Not Everyone Can Write Great Examples But Great Examples Can Come From Anywhere
  • Shayan Doroudi, Ece Kamar and Emma Brunskill
Platform-related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks
  • Rehab Qarout, Alessandro Checco, Gianluca Demartini and Kalina Bontcheva
Progression in a Language Annotation Game with a Purpose
  • Chris Madge, Juntao Yu, Jon Chamberlain, Udo Kruschwitz, Silviu Paun and Massimo Poesio
Second Opinion: Supporting Last-Mile Person Identification with Crowdsourcing and Face Recognition
  • Vikram Mohanty, Kareem Abdol-Hamid, Courtney Ebersohl and Kurt Luther
Testing Stylistic Interventions to Reduce Emotional Impact of Content Moderation Workers
  • Sowmya Karunakaran and Rashmi Ramakrishnan
The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems
  • Mahsan Nourani, Samia Kabir, Sina Mohseni and Eric Ragan
Understanding the Impact of Text Highlighting in Crowdsourcing Tasks
  • Jorge Ramirez, Marcos Baez, Fabio Casati and Boualem Benatallah
What You See is What You Get? The Impact of Representation Criteria on Human Bias in Hiring
  • Andi Peng, Besmira Nushi, Emre Kiciman, Kori Inkpen, Siddharth Suri and Ece Kamar
Who is in Your Top Three? Optimizing Learning in Elections with Many Candidates
  • Nikhil Garg, Lodewijk Gelauff, Sukolsak Sakshuwong and Ashish Goel