Opening (8:55 - 9:00)
Invited Talk: Jeff Bigham (9:00 - 9:40)
“Crowd Agents: Interactive Crowd-Powered Systems in the Real World”
Abstract:
In this talk, I'll discuss several interactive crowd-powered systems that help people address real-world problems. For instance, VizWiz sends questions blind people have about their visual environment to the crowd, Legion allows outsourcing of desktop tasks to the crowd, and Scribe allows the crowd to caption audio in real-time. Thousands of people have engaged with these systems, providing an interesting look at how end users want to interact with crowd work.
Collectively, these systems illustrate a new approach to human computation in which the dynamic crowd is provided the computational support needed to act as a single, high-quality agent. The classic advantage of the crowd has been its wisdom, but our systems are beginning to show how crowd agents can surpass even expert individuals on motor and cognitive performance tasks.
Bio:
Jeffrey P. Bigham is an Assistant Professor in the Department of Computer Science at the University of Rochester, where he heads the ROC HCI Group. His work is at the intersection of human-computer interaction, human computation, and artificial intelligence, with a focus on developing innovative technology that serves people with disabilities in their everyday lives. Jeffrey received his B.S.E. degree in Computer Science from Princeton University in 2003. He received his M.Sc. degree in 2005 and his Ph.D. in 2009, both in Computer Science and Engineering from the University of Washington, working with Richard E. Ladner. For his innovative research, Dr. Bigham has won a number of awards, including the Microsoft Imagine Cup Accessible Technology Award, the Andrew W. Mellon Foundation Award for Technology Collaboration, the MIT Technology Review Top 35 Innovators Under 35 Award, and Best Paper Awards at UIST, WSDM, and ASSETS. In 2012, he received the National Science Foundation CAREER Award.
Session 1: Games (9:40 - 10:20)
9:40 - 10:00 Systematic Analysis of Output Agreement Games: Effects of Gaming Environment,
Social Interaction, and Feedback
Shih-Wen Huang, University of Illinois at Urbana-Champaign
Wai-Tat Fu, University of Illinois at Urbana-Champaign
10:00 - 10:20 Doodling: A Gaming Paradigm for Generating Language Data
A Kumaran, Microsoft Research India
Sujay Jauhar, University of Wolverhampton
Sumit Basu, Microsoft Research
1st Coffee Break / Poster Session (10:20 - 11:00)
CAPTCHAs with a Purpose
Suhas Aggarwal, IIT Guwahati
Crowd-Sourcing Design: Sketch Minimization using Crowds for Feedback
David Engel, MIT
Verena Kottler, Max Planck Institute for Developmental Biology
Christoph Malisi, Max Planck Institute for Developmental Biology
Marc Röttig, University of Tuebingen Center for Bioinformatics
Eva Willing, Max Planck Institute for Plant Breeding Research
Sebastian Schultheiss, Max Planck Institute
To Crowdsource or Not to Crowdsource?
Gireeja Ranade, UC Berkeley
Lav R. Varshney, IBM Thomas J. Watson Research Center
Learning from Crowds and Experts
Hiroshi Kajino, The University of Tokyo
Yuta Tsuboi, IBM Research - Tokyo
Issei Sato, The University of Tokyo
Hisashi Kashima, The University of Tokyo
Squaring and Scripting the ESP Game
François Bry, Ludwig-Maximilian University of Munich
Christoph Wieser, Ludwig-Maximilian University of Munich
Automatically Providing Action Plans Helps People Complete Tasks
Nicolas Kokkalis, Stanford
Scott Klemmer, Stanford
Thomas Koehn, Stanford
The Role of Super Agents in Mobile Crowdsourcing
Mohamed Musthag, University of Massachusetts, Amherst
Deepak Ganesan, University of Massachusetts
Detecting Deceptive Opinion Spam using Human Computation
Christopher Harris, The University of Iowa
Improving Quality of Crowdsourced Labels via Probabilistic Matrix Factorization
Hyun Joon Jung, University of Texas at Austin
Matthew Lease, University of Texas at Austin
Towards Social Norm Design for Crowdsourcing Markets
Chien-Ju Ho, UCLA
Yu Zhang, UCLA
Jennifer Wortman Vaughan, UCLA
Mihaela van der Schaar, UCLA
Session 2: Machine Learning (11:00 - 11:40)
11:00 - 11:20 Crowdclustering with Sparse Pairwise Labels: A Matrix Completion Approach
Jinfeng Yi, Michigan State University
Rong Jin, Michigan State University
Anil Jain, Michigan State University
Shaili Jain, Yale University
11:20 - 11:40 Crowdsourcing Control: Moving Beyond Multiple Choice
Christopher Lin, University of Washington
Mausam, University of Washington
Daniel Weld, University of Washington
Session 3: Platforms (11:40 - 12:20)
11:40 - 12:00 TurkServer: Enabling Synchronous and Longitudinal Online Experiments
Andrew Mao, Harvard University
Yiling Chen, Harvard University
Krzysztof Gajos, Harvard University
David Parkes, Harvard University
Ariel Procaccia, Carnegie Mellon University
Haoqi Zhang, Harvard University
12:00 - 12:20 MobileWorks: A Non-Marketplace Architecture for Accurate Human Computation
Anand Kulkarni*, MobileWorks / UC Berkeley
Philipp Gutheim, MobileWorks, UC Berkeley
Prayag Narula, MobileWorks, UC Berkeley
David Rolnitzky, MobileWorks
Tapan Parikh, University of California, Berkeley
Bjoern Hartmann, University of California, Berkeley
Lunch (12:20 - 1:40)
Invited Talk: Adam Kalai (1:40 - 2:20)
“Programming by Example Revisited”
Abstract: An old dream is to program computers through examples rather than by writing code. One particularly common domain is text processing: automating tasks such as extracting author names from a bibliography or removing duplicate entries from a log file. As in many human computation problems, a key aspect of the problem is breaking a task into small steps, each of which is easy to understand. Over the years, several systems have been built with this aim, yet none have been widely adopted. We discuss why this is the case, survey the earlier systems, and describe what a handful of researchers are doing to address the problem.
Joint work with: Sumit Gulwani, Butler Lampson, Aditya Menon, Rob Miller, Omer Tamuz, Shubham Tulsiani, and Kuat Yessenov
Bio: Adam Tauman Kalai received his BA (1996) from Harvard, and MA (1998) and PhD (2001) under the supervision of Avrim Blum from CMU. After an NSF postdoctoral fellowship at MIT with Santosh Vempala, he served as an assistant professor at the Toyota Technological Institute at Chicago and then at Georgia Tech. He is now a Senior Researcher at Microsoft Research New England. His honors include an NSF CAREER award and an Alfred P. Sloan fellowship. His research focuses on computational learning theory, human computation, and algorithms.
Session 4: Applications (2:20 - 3:40)
2:20 - 2:40 Crowdsourcing Annotations for Visual Object Detection
Hao Su, Stanford University
Jia Deng, Stanford University
Fei-Fei Li, Stanford University
2:40 - 3:00 Part Annotations via Pairwise Correspondence
Subhransu Maji, TTIC
Greg Shakhnarovich, TTIC
3:00 - 3:20 Contextual Commonsense Knowledge Acquisition from Social Content by
Crowdsourcing Explanations
Yen-Ling Kuo, National Taiwan University
Jane Yung-jen Hsu, National Taiwan University
Fuming Shih, MIT
3:20 - 3:40 Hallucination: A Mixed-Initiative Approach for Efficient Document Reconstruction
Haoqi Zhang, Harvard University
John Lai, Harvard University
Moritz Baecher, Harvard University
2nd Coffee Break / Poster Session (3:40 - 4:20)
Social Choice for Human Computation
Andrew Mao, Harvard University
Ariel Procaccia, Carnegie Mellon University
Yiling Chen, Harvard University
Predicting Crowd-based Translation Quality with Language-independent Feature Vectors
Markus Krause, University of Bremen
Jan Smeddinck, University of Bremen
Niklas Kilian
Nina Runge, University of Bremen
Machine-learning for Spammer Detection in Crowd-sourcing
Harry Halpin, MIT
Roi Blanco, Yahoo! Research
Crowdsourcing: Dynamically Switching between Synergistic Workflows
Christopher Lin, University of Washington
Mausam, University of Washington
Daniel Weld, University of Washington
Learning Sociocultural Knowledge via Crowdsourced Examples
Mark Riedl, Georgia Institute of Technology
Boyang Li, Georgia Institute of Technology
Stephen Lee-Urban, Georgia Institute of Technology
Darren Appling, Georgia Institute of Technology
Playful Surveys: Easing Challenges of Human Subject Research with Online Crowds
Markus Krause, University of Bremen
Jan Smeddinck, University of Bremen
Aneta Takhtamysheva, University of Bremen
Velislav Markov, University of Bremen
Nina Runge, University of Bremen
Personalized Online Education: A Crowdsourcing Challenge
Daniel Weld, University of Washington
Eric Horvitz, Microsoft Research
Raphael Hoffmann, University of Washington
Eytan Adar, University of Michigan
Lydia Chilton, University of Washington
Mitchell Koch, University of Washington
Christopher Lin, University of Washington
Mausam, University of Washington
Using the Crowd to Do Natural Language Programming
Mehdi Manshadi, University of Rochester
Carolyn Keenan, University of Rochester
James Allen, University of Rochester
Diamonds From the Rough: Improving Drawing, Painting, and Singing via Crowdsourcing
Yotam Gingold, Rutgers / Columbia
Etienne Vouga, Columbia University
Eitan Grinspun, Columbia University
Haym Hirsh, Rutgers University
Collecting Representative Pictures for Words: A Human Computation Approach based on Draw Something Game
Jun Wang, Syracuse University
Bei Yu, Syracuse University
Business Meeting: 1st AAAI Human Computation Conference 2013 (4:20 - 5:20)
Led by Eric Horvitz
Closing (5:20 - 5:30)
Workshop Dinner (6:30 - 9:00)