Keynotes

Speaker: Jenn Wortman Vaughan

Microsoft

Some Very Human Challenges in Responsible AI (Or Why My Research Trajectory Took a Surprising Turn)

Abstract

In this talk, I'll give an overview of some of the challenges that arise in supporting AI fairness, interpretability, and responsible AI more broadly in industry practice. I'll examine these challenges through the lens of three case studies drawn from my own research experiences: disaggregated evaluations, dataset documentation, and interpretability tools. These examples illustrate the importance of interdisciplinary research and human-centered approaches to responsible AI.

Speaker Biography

Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City. She currently focuses on Responsible AI—including transparency, interpretability, and fairness—as a member of MSR's FATE group and as co-chair of Microsoft's Aether Working Group on Transparency. Jenn's research background is in machine learning and algorithmic economics. She is especially interested in the interaction between people and AI, and has often studied this interaction in the context of prediction markets and other crowdsourcing systems. Jenn came to MSR in 2012 from UCLA, where she was an assistant professor in the computer science department. She completed her Ph.D. at the University of Pennsylvania in 2009, and subsequently spent a year as a Computing Innovation Fellow at Harvard. She is the recipient of Penn's 2009 Rubinoff dissertation award for innovative applications of computer technology, a National Science Foundation CAREER award, a Presidential Early Career Award for Scientists and Engineers (PECASE), and a variety of best paper awards. Jenn co-founded the Annual Workshop for Women in Machine Learning (WiML), which has been held each year since 2006, and recently served as Program Co-chair of NeurIPS 2021.

Speaker: Jeffrey Bigham

Carnegie Mellon University & Apple

Re-Remembering the Humans Throughout the Loop

Abstract

In both HCI and ML we use the abstraction of a loop to describe how humans and technology work together, yet the development of intelligent systems is a messy iterative and divergent process that is a far cry from the circles in our diagrams. 17 years ago, I set out to build my first system that would describe visual images for people who are blind, and I’m still working on it. In this talk, I’ll review the humans and loops uncovered while working on what seems like a straightforward problem, as an illustrative example of the twists and turns of technology and people that sometimes come together to deepen our understanding. I’ll use this as context to relate human computation to both past and future conceptual framings for humans working alongside intelligent systems, and might even risk making a few predictions.

Speaker Biography

Jeffrey Bigham's research combines computation and crowds to make novel deployable interactive systems, and ultimately solve hard problems in computer science. The systems he and his lab have created combine machine learning and real-time crowdsourcing in domains like (i) access technology, (ii) interactive dialog systems, and (iii) support for crowd/gig workers. Much of his work focuses on accessibility because he sees the field as a window into the future, given that people with disabilities are often the earliest adopters of AI. Bigham is an Associate Professor in the Human-Computer Interaction and Language Technologies Institutes in the School of Computer Science at Carnegie Mellon University. He also leads the Human-Centered Machine Intelligence Research Group at Apple. Bigham received a B.S.E. degree in Computer Science from Princeton University in 2003, and a Ph.D. in Computer Science and Engineering from the University of Washington in 2009. He has won an Alfred P. Sloan Foundation Fellowship, the MIT Technology Review Top 35 Innovators Under 35 Award, and the NSF CAREER Award.

Speaker: Seth Cooper

Northeastern University

Games for Crowdsourcing and Crowdsourcing for Games

Abstract

Where is there potential for crowdsourcing and video games to benefit each other? In this talk, I will discuss examples of how techniques from games might be applied to crowdsourcing and how crowdsourcing might be applied to games. Approaches to dynamic difficulty adjustment from games can be used in crowdsourcing to improve task assignment to members of the crowd. Crowdsourced player recruitment can be used to enable rapid design iteration by streamlining playtesting for games. Finally, we can close the loop and use crowdsourced playtesting to improve crowdsourcing games themselves.

Speaker Biography

Seth Cooper is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University. He earned his PhD in Computer Science and Engineering at the University of Washington. Seth's work focuses on video games, crowdsourcing, and the combination of the two. He is co-creator of Foldit, a video game that has allowed hundreds of thousands of players to be involved in scientific research in biochemistry. Seth has previously worked at Square Enix, Electronic Arts, and Pixar Animation Studios, and as the Creative Director of the Center for Game Science.

Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive crowdsourcing and human computation email announcements (e.g., calls for papers, job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past communications on the HCOMP mailing list.
  • Visit the HCOMP blog, where we post new ideas related to crowd and social computing research.
  • Keep track of our Twitter hashtag #HCOMP2022.
  • Join the HCOMP Slack Community to be in touch with researchers, industry practitioners, and crowd workers around human computation and related topics.