Workshops

W1: Data Excellence

Date and Time: October 26, 2020, 2:00pm - 6:20pm CET

Organizers: Praveen Paritosh, Google; Matt Lease, UT Austin and Amazon; Mike Schaekermann, Amazon and University of Waterloo; Lora Aroyo, Google

Overview: Human-annotated data is crucial for operationalizing empirical ways of evaluating, comparing, and assessing the progress of ML/AI research. Because human-annotated data is the compass the entire ML/AI community steers by, the human computation (HCOMP) research community has a multiplicative effect on the progress of the field. Optimizing the cost, size, and speed of data collection has attracted significant attention from HCOMP and related research communities, but in the first-to-market rush for data, the maintainability, reliability, validity, and fidelity of datasets are often overlooked. We want to turn this way of thinking on its head and highlight examples, case studies, and methodologies for excellence in data collection. Data excellence happens organically given appropriate support, expertise, diligence, commitment, pride, and community. We will invite speakers and submissions exploring such case studies in data excellence, focusing on empirical and theoretical methodologies for the reliability, validity, maintainability, and fidelity of data; a toy reliability check is sketched after this overview. The goals of the workshop follow from these themes.
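To make "reliability" concrete, here is a minimal sketch of one standard empirical check: chance-corrected inter-annotator agreement (Cohen's kappa) between two annotators who labeled the same items. The function, toy labels, and items are illustrative assumptions, not material from the workshop.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators (Cohen's kappa)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where the two annotators match.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from each annotator's label marginals.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    # Assumes p_e < 1 (i.e., not every label identical); a toy sketch, not a robust library.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels from two annotators on the same ten items.
ann_1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "pos"]
ann_2 = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "pos"]
print(f"kappa = {cohens_kappa(ann_1, ann_2):.2f}")
```

On the toy labels above this prints kappa = 0.58: agreement well above chance, but far from the near-perfect reliability a benchmark dataset would want.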

W2: Rigorous Evaluation of AI Systems

Date and Time: October 25, 2020, 2:00pm - 6:20pm CET

Organizers: Bernease Herman, Sarah Luger, Maria Stone, Kurt Bollacker

Overview: Human-annotated datasets have emerged as the primary mechanism for operationalizing empirical ways of evaluating, comparing, and assessing progress in machine learning, AI, and related fields. Crowdsourcing has solved the problem of scale in human annotation, while also amplifying the impact of the variability of humans as the instruments providing these annotations. For research and applications whose evaluations rely on human-annotated datasets or methodologies, we are interested in the meta-questions around characterizing those methodologies; one simple characterization, a bootstrap confidence interval over an aggregated crowd evaluation, is sketched after this overview. The expected workshop activities will explore these questions.
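As one hypothetical illustration of characterizing an evaluation rather than just running it, the sketch below aggregates crowd labels by majority vote and attaches a bootstrap confidence interval to the resulting accuracy, so the variability of the human-annotated evaluation is reported alongside the point estimate. All names, the toy data, and the three-annotators-per-item design are assumptions for illustration.

```python
import random

def majority_vote(labels):
    """Aggregate one item's crowd labels by plurality (ties broken arbitrarily)."""
    return max(set(labels), key=labels.count)

def bootstrap_accuracy_ci(predictions, gold, n_resamples=2000, seed=0):
    """95% bootstrap confidence interval for accuracy over the evaluation items."""
    rng = random.Random(seed)
    n = len(gold)
    scores = []
    for _ in range(n_resamples):
        # Resample evaluation items with replacement and rescore.
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(sum(predictions[i] == gold[i] for i in idx) / n)
    scores.sort()
    return scores[int(0.025 * n_resamples)], scores[int(0.975 * n_resamples)]

# Hypothetical crowd labels (3 annotators per item) and gold labels for 6 items.
crowd = [["a", "a", "b"], ["b", "b", "a"], ["a", "a", "b"],
         ["a", "a", "a"], ["b", "a", "b"], ["b", "a", "b"]]
gold = ["a", "b", "b", "a", "b", "a"]

preds = [majority_vote(item) for item in crowd]
acc = sum(p == g for p, g in zip(preds, gold)) / len(gold)
low, high = bootstrap_accuracy_ci(preds, gold)
print(f"accuracy = {acc:.2f}, 95% bootstrap CI = [{low:.2f}, {high:.2f}]")
```

Reporting the interval rather than the bare accuracy is the point: two systems whose intervals overlap heavily cannot be confidently ranked on this evaluation set.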

Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive crowdsourcing and human computation announcements (e.g., calls for papers, job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past communications on the HCOMP mailing list.
  • Keep track of our Twitter hashtag #HCOMP2020.
  • Join the HCOMP Slack Community to stay in touch with researchers, practitioners, industry members, and crowd workers around human computation and related topics.