Keynotes

Speaker: Mounia Lalmas

Spotify

Engagement, Metrics and Personalisation at Scale


Date: 26.10.2020

Talk Abstract

User engagement plays a central role in companies and organisations that aim to personalise their online services. A key challenge is to leverage knowledge about users' online interactions to understand what engages them. A common way that engagement is measured and understood is through the definition and development of metrics of user satisfaction, which can act as a proxy for user engagement. This talk will present various works and personal thoughts on how to measure user engagement, and will discuss the definition and development of user satisfaction metrics that can serve as such a proxy. An important message is that, for personalisation to work in both the short and the long term, it is important to account for the heterogeneity of both users and content when formalising the notion of engagement, and in turn to design the appropriate metrics to capture and optimise for it. One way to achieve this is to follow four steps: 1) Understanding intents; 2) Optimising for the right metric; 3) Acting on segmentation; and 4) Thinking about diversity. The talk will illustrate these steps with some of the research at Spotify.

Speaker Biography

Mounia Lalmas is a Director of Research at Spotify, and the Head of Tech Research in Personalization. Mounia also holds an honorary professorship at University College London. Before that, she was a Director of Research at Yahoo, where she led a team of researchers working on advertising quality for Gemini, Yahoo's native advertising platform. She also worked with various teams at Yahoo on topics related to user engagement in the context of news, search, and user generated content. Prior to this, she held a Microsoft Research/RAEng Research Chair at the School of Computing Science, University of Glasgow. Before that, she was Professor of Information Retrieval at the Department of Computer Science at Queen Mary, University of London. Her work focuses on studying user engagement in areas such as native advertising, digital media, social media, search, and now audio. She has given numerous talks and tutorials on these and related topics, including a WWW 2019 tutorial on 'Online User Engagement: Metrics and Optimization', which will also be given at KDD 2020. She is regularly a senior programme committee member at conferences such as WSDM, KDD, WWW and SIGIR. She was co-programme chair for SIGIR 2015, WWW 2018 and WSDM 2020. She is also the co-author of a book written as the outcome of her WWW 2013 tutorial on 'Measuring User Engagement'.

Speaker: Anna Ridler

Artist

Using Machine Learning in an Artistic Context: Classification and its Consequences


Date: 27.10.2020

Talk Abstract

As a visual artist, I treat datasets as a core part of both my process and output. I will talk about what I believe a dataset to be; the compiling of datasets and the training of machine learning models on them; how datasets are created and the different ways they can be constructed; possibilities for overcoming bias; and some of the issues, problems, and solutions involved in working with them. In particular, I will focus on classification and the consequences of using it in datasets, machine learning, and artistic works.

Speaker Biography

Anna Ridler (b. 1985, UK) is an artist and researcher. She has exhibited at institutions such as the V&A Museum, Ars Electronica, HeK Basel, Impakt and the Barbican Centre and has degrees from the Royal College of Art, Oxford University and University of Arts London. She was a 2018 EMAP fellow and was listed by Artnet as one of nine “pioneering artists” exploring AI’s creative potential. She is interested in working with collections of information, particularly self-generated data sets, to create new and unusual narratives in a variety of mediums, and what happens when things cannot fit into discrete categories. She is currently interested in the intersection of machine learning and nature and what we can learn from history.

Speaker: Chris Welty

Google Research

Is That Significant?


Date: 28.10.2020

Talk Abstract

One of the most frustrating aspects of following empirical progress in AI, at both the macro (community) and micro (one's own experiments) levels, is determining whether an improvement in some metric represents a significant change. As evidenced by its widespread absence in publications, the calculation of confidence intervals is simply not a part of the usual training in AI. Instead, we frequently see tables of lackluster performance results across a variety of datasets and metrics that show some gains and some losses. Many scientists make basic mistakes in constructing their null hypotheses, which means their error bars would have been wrong even if they had been provided. Most toolsets that provide significance tests obscure the actual null hypothesis, leaving even the most knowledgeable scientists in the dark. This has not stopped leaderboards from popping up, such as the SQuAD leaderboard, that simply rank systems by their metric score with no indication of whether the ranking differences are significant. In this talk I will outline a reusable crowd-based approach to evaluation and significance testing in AI, designed to answer the question: to what degree is a measured difference between two systems (or two versions of the same system) significant?
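As a point of reference for the question the talk addresses (not the crowd-based approach it proposes), the sketch below shows one standard way to test whether a metric difference between two systems is significant: a paired permutation (randomisation) test over per-example scores. The data here are hypothetical per-example accuracies, used purely for illustration.

```python
import numpy as np

def paired_permutation_test(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Paired randomisation test for the difference in mean per-example scores.

    Null hypothesis: for each test example, system A's and system B's scores are
    exchangeable, so the observed mean difference could have arisen by chance.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    observed = diffs.mean()  # observed mean difference between the two systems
    count = 0
    for _ in range(n_resamples):
        # Randomly flip the sign of each per-example difference (i.e. swap A and B).
        signs = rng.choice([-1.0, 1.0], size=diffs.size)
        if abs((signs * diffs).mean()) >= abs(observed):
            count += 1
    return observed, count / n_resamples  # mean difference and two-sided p-value

# Hypothetical per-example accuracies (0/1) for two systems on the same test set.
sys_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
sys_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 1]
delta, p_value = paired_permutation_test(sys_a, sys_b)
print(f"mean difference = {delta:.3f}, two-sided p = {p_value:.3f}")
```

With only ten examples, as here, even a visible gap in mean accuracy typically yields a large p-value, which is exactly the kind of non-significant "improvement" the talk cautions against over-interpreting.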

Speaker Biography

Dr. Chris Welty is a Sr. Research Scientist at Google in New York. He is the co-creator of CrowdTruth, a method for gathering annotation data from the crowd that rejects the usual binary distinction of truth. At Google his research has influenced several products you've probably used, such as product availability at stores, dish availability at restaurants and restaurant price levels on Google Maps. Before Google, Dr. Welty was a member of the technical leadership team for IBM’s Watson – the question answering computer that defeated the all-time best Jeopardy! champions in a widely televised contest. He is a recipient of the AAAI Feigenbaum Prize for his work. He began his career in the area of Ontology in AI, and played a seminal role in the development of the Semantic Web. He is widely known as the AI Bookie - co-editor of the AI Magazine column that curates scientific bets about AI's future. Dr. Welty holds a Ph.D. in Computer Science from Rensselaer Polytechnic Institute, where he worked, taught, and helped to form NYSERNet, and then PSINet, the first commercial internet access provider. He has also worked at AT&T Bell Labs, Vassar College and the Italian National Research Council (CNR).

Speaker: Julia Noordegraaf

University of Amsterdam

Amplified Intelligence: Uniting Human and Machine Intelligence in Cultural Heritage Workflows


Date: 29.10.2020

Talk Abstract

In the field of cultural heritage, artificial intelligence technologies are increasingly adopted to extract and interpret the ‘Big Data of the Past’ – the historical information that forms the basis of a society’s collective memory and identity. At the same time, these technologies still struggle with the complex nature of historical data, in particular when it comes to interpreting its meaning and relevance. Hence, the implementation of such computational techniques is often combined with facilities for engaging human intelligence, in the form of crowdsourcing or citizen science projects. This lecture reflects on the opportunities of AI and Citizen Science workflows for accessing, interpreting and reusing cultural heritage data, assessing the extent to which these solutions are responsive to the complexity of human culture and meaningful to its users. A range of examples from the practice of (audiovisual) archives will be discussed, including human-machine workflows for transcribing and understanding written and spoken text, analyzing images and generating new objects at the Amsterdam City Archives, EYE Film Museum, BBC, and the Netherlands Institute for Sound and Vision. Areas of use include the cultural heritage sector itself (European Time Machine project), the creative industries (AI generated compilation films at EYE and BBC) and digital humanities research (human-computer workflows in the CLARIAH Media Suite). Building on the history of ideas about how computational technologies can be assistive to human intelligence, I will argue we need to design frameworks for an ‘amplified intelligence’, a collaboration between humans and machines that is responsive to the cultural values our archives, museums and libraries uphold.

Speaker Biography

Julia Noordegraaf is professor of Digital Heritage in the department of Media Studies at the University of Amsterdam. Within the Faculty of Humanities she acts as director of the digital humanities research program and lab Creative Amsterdam (CREATE) that studies the history of urban creativity using digital data and methods. Noordegraaf’s research focuses on the preservation, exhibition and reuse of audiovisual and digital cultural heritage. She has published, amongst others, the monograph Strategies of Display (2004/2012) and, as principal editor, Preserving and Exhibiting Media Art (2013) and acts as principal editor of the Cinema Context database on Dutch film culture. She currently leads research projects on the conservation of digital art (in the Horizon 2020 Marie Curie ITN project NACCA) and on the reuse of digital heritage in data-driven historical research (besides CREATE in the project Virtual Interiors as Interfaces for Big Historical Data Research). She is a former fellow of the Netherlands Institute for Advanced Study in the Humanities and Social Sciences and acts as board member for Media Studies in CLARIAH, the national infrastructure for digital humanities research, funded by the Netherlands Organization for Scientific Research, NWO. Noordegraaf currently coordinates the realization of the Amsterdam Time Machine and acts as Vice President of the European Time Machine Organization that aims to build a simulator for 5,000 years of European history.

Speaker: Pietro Perona

Amazon

Visipedia: combining people, data and machines to distill and share knowledge


Date: 29.10.2020

Talk Abstract

Visipedia is a network of people and machines designed to harvest and organize visual information and make it accessible to anyone, anywhere. I will explore technical challenges arising from Visipedia and discuss their implications for computer vision, machine learning, human-machine systems and visual psychology. I will discuss a case study: an automated field guide to the birds of North America. Key contributions include a method for characterizing the multidimensional wisdom of crowdworkers and a classification pipeline combining input from humans and machines. I will conclude by discussing several open issues, ranging from algorithm development to community engagement.

Speaker Biography

Dr. Pietro Perona is the Allen E. Puckett Professor of Electrical Engineering at Caltech. He directs Computation and Neural Systems (www.cns.caltech.edu), a PhD program centered on the study of biological brains and intelligent machines. Professor Perona’s research centers on vision. He has contributed to the theory of partial differential equations for image processing and boundary formation, and to modeling the early visual system’s function. He is currently interested in visual categories and visual recognition.
