November 6 - Decision-making

Panelist: Ariel Procaccia

Harvard

Democracy and the Pursuit of Randomness


Abstract

Sortition is a storied paradigm of democracy built on the idea of choosing representatives through lotteries instead of elections. In recent years this idea has found renewed popularity in the form of citizens’ assemblies, which bring together randomly selected people from all walks of life to discuss key questions and deliver policy recommendations. A principled approach to sortition, however, must resolve the tension between two competing requirements: that the demographic composition of citizens’ assemblies reflect the general population and that every person be given a fair chance (literally) to participate. I will describe our work on designing, analyzing and implementing randomized participant selection algorithms that balance these two requirements. I will also discuss practical challenges in sortition based on experience with the adoption and deployment of our open-source system, Panelot.
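To make the tension concrete, here is a toy Python sketch (the pool, attribute, and quotas are invented for illustration; this is not the selection algorithm from the talk). Stratified sampling hits the quotas exactly, and in this single-attribute case it also happens to give everyone an equal chance; with several overlapping attributes, satisfying quotas while keeping selection probabilities fair is the algorithmic challenge the talk addresses.

```python
import random

# Invented toy pool: 100 people with one demographic attribute.
pool = [{"id": i, "group": "urban" if i < 70 else "rural"} for i in range(100)]
quotas = {"urban": 7, "rural": 3}  # target composition of a 10-person panel

def sample_quota_panel(pool, quotas, rng):
    """Draw one panel that exactly meets the quotas by sampling
    uniformly within each demographic group (stratified sampling)."""
    panel = []
    for group, k in quotas.items():
        members = [p for p in pool if p["group"] == group]
        panel.extend(rng.sample(members, k))
    return panel

# Empirically estimate each individual's probability of being selected.
rng = random.Random(0)
trials = 20_000
counts = {p["id"]: 0 for p in pool}
for _ in range(trials):
    for person in sample_quota_panel(pool, quotas, rng):
        counts[person["id"]] += 1

# Each urban resident is chosen with probability 7/70 = 0.10 and each rural
# resident with 3/30 = 0.10: equal only because these quotas happen to be
# proportional. With overlapping attributes (age x gender x region), exact
# stratification is impossible, and naive quota sampling can leave some
# people with far lower chances than others.
print(min(counts.values()) / trials, max(counts.values()) / trials)
```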

Biography

Ariel Procaccia is Gordon McKay Professor of Computer Science at Harvard University. He works on a broad and dynamic set of problems related to AI, algorithms, economics, and society. He has helped create systems and platforms that are widely used to solve everyday fair division problems, resettle refugees, mitigate bias in peer review and select citizens’ assemblies. To make his research accessible to the public, he regularly writes opinion and exposition pieces for publications such as the Washington Post, Bloomberg, Wired and Scientific American. His distinctions include the Social Choice and Welfare Prize (2020), Guggenheim Fellowship (2018), IJCAI Computers and Thought Award (2015) and Sloan Research Fellowship (2015).

Panelist: Johan Ugander

Stanford

Harvesting randomness to understand computational social systems


Abstract

Modern social systems are increasingly infused with algorithmic components, designed to optimize various objectives under diverse constraints. Examples include school choice mechanisms that assign students to schools, peer review matching systems that assign papers to reviewers, and targeting strategies that seed product adoptions in social networks. In many such systems (and in all of these examples), the algorithms are commonly randomized, motivated by fairness, strategic, or efficiency considerations. In this talk, I will describe general principles for how such randomness can be harvested to make causal inferences not only about the effects of these systems on various outcomes, but also about how the system would behave under alternative algorithmic designs. By applying these methods to computational social systems, we can gain a deeper understanding of the ways in which these systems operate and their impact on individuals and society as a whole.
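The core idea can be sketched with a toy off-policy evaluation example (the policies, outcomes, and numbers below are invented; these are not the methods or systems from the talk). Because a randomized system records, or can reconstruct, the probability with which each unit received its assignment, inverse-propensity weighting can reweight logged outcomes to estimate what an alternative design would have produced.

```python
import random

rng = random.Random(1)

# Invented population: each unit has one covariate x in [0, 1].
units = [{"x": rng.random()} for _ in range(50_000)]

def deployed_policy(u):     # deployed design: treat with prob 0.3 + 0.4*x
    return 0.3 + 0.4 * u["x"]

def alternative_policy(u):  # counterfactual design we want to evaluate
    return 0.8 if u["x"] > 0.5 else 0.1

def outcome(u, treated):    # unknown to the analyst; only simulates "nature"
    return u["x"] + (0.5 if treated else 0.0) + rng.gauss(0, 0.1)

# Log data under the deployed design, keeping the assignment probability.
logs = []
for u in units:
    p = deployed_policy(u)
    t = rng.random() < p
    logs.append((u, t, p, outcome(u, t)))

# Inverse-propensity weighting: reweight each logged outcome by how likely
# the observed assignment would have been under the alternative design,
# relative to how likely it was under the deployed one.
est = 0.0
for u, t, p, y in logs:
    q = alternative_policy(u)
    w = q / p if t else (1 - q) / (1 - p)
    est += w * y
est /= len(logs)
print(f"Estimated mean outcome under the alternative design: {est:.3f}")
```

In this simulation the truth is computable directly (roughly 0.725), so the estimate can be checked against it; in a real system only the logs are available, which is what makes the recorded randomness so valuable.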

Biography

Johan Ugander is an Associate Professor at Stanford University in the Department of Management Science & Engineering, within the School of Engineering. His research develops algorithmic and statistical frameworks for analyzing social networks, social systems, and other large-scale social and behavioral data. Prior to joining the Stanford faculty, he was a postdoctoral researcher at Microsoft Research Redmond (2014-2015) and held an affiliation with the Facebook Data Science team (2010-2014). He obtained his Ph.D. in Applied Mathematics from Cornell University in 2014. His awards include an NSF CAREER Award, a Young Investigator Award from the Army Research Office (ARO), three Best Paper Awards (2012 ACM WebSci Best Paper, 2013 ACM WSDM Best Student Paper, 2020 AAAI ICWSM Best Paper), and the 2016 Eugene L. Grant Undergraduate Teaching Award from the Department of Management Science & Engineering.

Panelist: Amy Zhang

University of Washington

Establishing Alignment on Socially-Constructed Concepts


Abstract

Technologists and researchers are increasingly deploying systems that make determinations on ill-defined and complex concepts at scale. Is this piece of content "toxic" or "harmful"? Is this generative model's output "appropriate" or "ethical"? To make such fuzzy concepts tractable for training and evaluating AI, steering generative model outputs, or conducting large-scale human-in-the-loop workflows, common strategies include 1) aligning judgments to the "average" human by collecting multiple human judgments and finding some center, and 2) aligning judgments to a more clearly defined standard by developing "constitutional" rubrics, instructions, or principles. However, these approaches fail to account for the socially constructed nature of such concepts. Instead of ignoring disagreement or arbitrarily choosing a concept's grounding, processes for alignment need to be able to capture misaligned interpretations and then resolve them through social processes for reconciliation. In this talk, I will present our lab's research on enabling collective alignment of socially-constructed concepts at scale, including novel annotation tools that capture calibrated uncertainty and offer targeted interventions, as well as a novel approach to alignment that grounds judgments in a body of precedents in addition to a constitution.
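A toy sketch of why "finding some center" can be lossy (the labels below are invented, not data from the talk): majority voting collapses each item to a single label, while even a simple measure like label entropy preserves the disagreement that, per this talk, should be surfaced and reconciled rather than averaged away.

```python
from collections import Counter
import math

# Invented annotations: five annotators label three items.
judgments = {
    "item1": ["toxic", "toxic", "toxic", "toxic", "toxic"],
    "item2": ["toxic", "toxic", "toxic", "not", "not"],
    "item3": ["toxic", "not", "toxic", "not", "not"],
}

for item, labels in judgments.items():
    counts = Counter(labels)
    majority, _ = counts.most_common(1)[0]
    # Shannon entropy of the empirical label distribution: 0 bits means
    # full agreement; 1 bit is maximal disagreement for two labels.
    n = len(labels)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    print(f"{item}: majority={majority!r}, disagreement={entropy:.2f} bits")

# item2 and item3 get confident-looking single labels under majority vote
# even though annotators are nearly split; the entropy column keeps that
# disagreement visible for downstream reconciliation.
```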

Biography

Amy X. Zhang is an assistant professor at the University of Washington's Allen School of Computer Science and Engineering, where she leads the Social Futures Lab, a group dedicated to reimagining social and collaborative systems to empower people and improve society. Her work has received awards at ACM CSCW and ACM CHI, and she has been a Google Research Scholar, a Belfer Fellow at the ADL, a Berkman Klein Fellow, a Google PhD Fellow, and an NSF CAREER recipient and Graduate Research Fellow. Besides her work at UW, she is currently a research consultant at AI2 with Semantic Scholar. Prior to UW, she was a postdoctoral researcher at Stanford after completing a PhD at MIT CSAIL, where she received MIT's George Sprowls Best Thesis Award in computer science. She received an MPhil in Computer Science from the University of Cambridge on a Gates Fellowship and a BS in Computer Science from Rutgers University, where she was captain of the Division I women's tennis team.

November 7 - Policy

Panelist: Vili Lehdonvirta

Oxford Internet Institute

How and why the humans in human computation are trying to resist their computational organizers


Abstract

Human computation relies on human intelligence to perform tasks that are difficult for computers to accomplish. Insofar as generative AI systems rely on human inputs such as data labels and safety testing, they, too, are a type of human computation. The original ambition of human computation pioneers was to harness the "cognitive surplus" of average Internet users to supply the labour required to perform such tasks. But in reality most such work has come to be carried out by paid-for cognitive labourers working under a variety of arrangements, from microworkers accessing Mechanical Turk from their homes to content moderators sitting at a Samasource office in Kenya. In this talk I will use social and economic theory and empirical examples to discuss why this has happened, what problems this can entail from the workers' perspective, and how the workers are attempting to respond with collective action against their computational organizers.

Biography

Vili Lehdonvirta is Professor of Economic Sociology and Digital Social Research at the Oxford Internet Institute, University of Oxford. He was the PI of the European Research Council-funded iLabour research project. His co-authored articles "Digital labour and development: Impacts of global digital labour platforms on worker livelihoods" (2017) and "Good gig, bad gig: Autonomy and algorithmic control in the global gig economy" (2019) are the two most cited articles in the multidisciplinary field of gig economy research. The Online Labour Index he co-developed is now maintained by the International Labour Organization. He is a former member of the European Commission's Expert Group on the Online Platform Economy and the High-Level Expert Group on Digital Transformation and EU Labour Markets. His new book, Cloud Empires: How Digital Platforms Are Overtaking the State and How We Can Regain Control, was published by MIT Press and shortlisted for the Association of American Publishers' 2023 PROSE Award.

Panelist: Natali Helberger

University of Amsterdam

Regulators on fire – the AI Act after the launch of ChatGPT


Abstract

With the launch of ChatGPT last year and the ensuing debate about the benefits and potential risks of generative AI, work on the European AI Act also shifted into a higher gear. The European Council and Parliament, working on their respective compromise texts, had to find ways to accommodate this new phenomenon. The attempts to adapt the AI Act went hand in hand with a lively public debate on what was so new and different about generative AI, whether it raised new, not yet anticipated risks, and how best to address a technology whose societal implications are not yet well understood. Most importantly: was the AI Act outdated even before it was adopted? In my presentation I would like to discuss the different approaches that the Council and Parliament adopted to governing generative AI, the most salient points of discussion, and where we stand now with the AI Act.

Biography

Natali Helberger is Distinguished University Professor of Law and Digital Technology, with a special focus on AI, at the University of Amsterdam and a member of the Institute for Information Law (IViR). Her research on AI and automated decision systems focuses on their impact on society and governance. Helberger co-founded the Research Priority Area Information, Communication, and the Data Society, which has played a leading role in shaping the international discussion on digital communication and platform governance. She is a founding member of the Human(e) AI research program and leads the Digital Transformation Initiative at the Faculty of Law. Since 2021, Helberger has also been director of the AI, Media & Democracy Lab, and since 2022, scientific director of the Algosoc (Public Values in the Algorithmic Society) Gravitation Consortium. A major focus of the Algosoc program is to mentor and train the next generation of interdisciplinary researchers. She is a member of several national and international research groups and committees, including the Council of Europe's Expert Group on AI and Freedom of Expression. Natali Helberger has been included in the worldwide list of "100 brilliant women in AI ethics to follow."

Panelist: Geoff Mulgan

University College London

Theory and practice for mobilising collective intelligence to address collective problems


Abstract

I will talk about various initiatives that seek to deepen the theory of collective intelligence (CI) while also having practical implications. The first is the theory of triggered hierarchies: how intelligent systems triage issues, often moving them from routine or automated handling to more complex judgements. The second is the theory of knowledge atrophy: how fields and systems forget without significant investment of labour in orchestration, training and implementation. The third is the relationship of CI to collective imagination: how societies break free from assuming existing social arrangements are natural by mobilising alternatives. The fourth is the relationship of CI to theories of synthesis: how multiple intelligences can be synthesised for action. I will then link these to some projects on the frontiers of practice, specifically in relation to job markets (how to build a collective intelligence system for past, present and future jobs and skills demands, drawing on work in several countries); using CI for the Sustainable Development Goals (drawing on work with the UNDP); and the governance of science (how to mobilise multiple forms of intelligence to guide the governance of fields such as synthetic biology and quantum computing).

Biography

Sir Geoff Mulgan is Professor at University College London in the Science, Technology, Engineering and Public Policy team. He was CEO of Nesta, the UK's innovation foundation, from 2011 to 2019. From 1997 to 2004 he held roles in the UK government, including director of the Strategy Unit and head of policy in the Prime Minister's office. He has been a reporter on BBC TV and radio and was the founder or co-founder of many organisations, including Demos, Uprising, the Social Innovation Exchange and Action for Happiness. He has worked with over 50 governments worldwide, the European Commission and the UN. He has a PhD in telecommunications, was visiting professor at LSE and Melbourne University, and was senior visiting scholar at Harvard. His books include 'The Art of Public Strategy' (OUP), 'Good and Bad Power' (Penguin), 'Big Mind: How Collective Intelligence Can Change Our World' (Princeton UP) and 'Social Innovation' (Policy Press). His latest books are 'Another World is Possible: How to Reignite Social and Political Imagination' (Hurst Publishers/OUP, 2022) and 'Prophets at a Tangent: How Art Shapes Social Imagination' (Cambridge University Press, 2023). He is an editor-in-chief of the journal 'Collective Intelligence'. Twitter: @geoffmulgan; website: geoffmulgan.com.

November 8 - AI

Panelist: Sherry Wu

Carnegie Mellon University

LLMs as Workers in Human-Computational Algorithms


Abstract

As AI systems such as LLMs rapidly advance, they can now perform tasks that were once exclusive to humans. This trend points towards extensive collaboration with LLMs, in which humans delegate tasks to them while focusing on higher-level skills unique to human capabilities. However, designing the right task delegation requires assessing LLMs' ability to display human-like behavior across a variety of tasks. In this talk, I will present our work on using LLMs to perform traditional human computation tasks, from simple ones like answering survey questions to more complex ones with multi-step workflows. I will compare humans' and LLMs' sensitivities to instructions, stress the importance of enabling traditionally human-facing safeguards for LLMs (e.g., disagreement resolution mechanisms and interface-enforced interactions), and discuss the potential of training humans and LLMs with complementary skill sets.
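One of the safeguards mentioned, disagreement resolution, can be sketched in a few lines (the llm_label stub below is a hypothetical stand-in for a real model call; this is not the talk's implementation): sample the LLM several times as if it were a small crowd of workers, and escalate any item on which the "workers" disagree.

```python
import random
from collections import Counter

def llm_label(text: str, worker_id: int) -> str:
    """Hypothetical stand-in for an LLM call, seeded so the sketch runs
    deterministically without an API; a real system would query a model."""
    return random.Random(f"{text}:{worker_id}").choice(["toxic", "not_toxic"])

def label_with_safeguard(text: str, n_workers: int = 3) -> str:
    """Treat repeated LLM samples as a small crowd and apply a classic
    human-computation safeguard: accept a unanimous answer, otherwise
    escalate the item (e.g., to a human) for disagreement resolution."""
    votes = Counter(llm_label(text, i) for i in range(n_workers))
    label, count = votes.most_common(1)[0]
    return label if count == n_workers else f"escalate: {dict(votes)}"

print(label_with_safeguard("example post to moderate"))
```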

Biography

Sherry Tongshuang Wu is an Assistant Professor in the Human-Computer Interaction Institute at Carnegie Mellon University. Her research lies at the intersection of Human-Computer Interaction and Natural Language Processing, and primarily focuses on how humans (AI experts, lay users, domain experts) can practically interact with (debug, audit, and collaborate with) AI systems. To this end, she has worked on assessing NLP model capabilities, supporting human-in-the-loop NLP model debugging and correction, and facilitating human-AI collaboration. She has authored award-winning papers in top-tier NLP, HCI, and visualization conferences and journals such as ACL, CHI, TOCHI, and TVCG. Before joining CMU, Sherry received her Ph.D. from the University of Washington and her bachelor's degree from the Hong Kong University of Science and Technology, and has interned at Microsoft Research, Google Research, and Apple. You can find out more about her at http://cs.cmu.edu/~sherryw.

Panelist: César Hidalgo

Universities of Toulouse, Manchester, and Harvard

Why people judge humans differently from machines: The role of perceived agency and experience


Abstract

People are known to judge artificial intelligence using a utilitarian moral philosophy and humans using a moral philosophy that emphasizes perceived intentions. But why do people judge humans and machines differently? Psychology suggests that people may have different mind perception models of humans and machines, and thus will treat human-like robots more similarly to the way they treat humans. Here we present a randomized experiment in which we manipulated people's perception of machine agency (e.g., the ability to plan and act) and experience (e.g., the ability to feel) to explore whether people judge machines that are perceived to be more similar to humans along these two dimensions more similarly to the way they judge humans. We find that people's judgments of machines become more similar to those of humans when they perceive machines as having more agency, but not more experience. Our findings indicate that people's use of different moral philosophies to judge humans and machines can be explained by a progression of mind perception models in which the perception of agency plays a prominent role. These findings add to the body of evidence suggesting that people's judgment of machines becomes more similar to their judgment of humans as machines are perceived as more human-like, motivating further work on the dimensions that modulate people's judgment of human and machine actions.

Biography

César Hidalgo is an American-Chilean physicist, professor, and author. He is known for his work in the field of complexity science and his contributions to understanding the dynamics of economic and social systems. Hidalgo developed the concept of "economic complexity," which measures the knowledge and capabilities embedded in a country's economy. Economic complexity has been used to show that economic development is driven not solely by the accumulation of capital and resources but also by the knowledge and diversity of a nation's productive activities. Hidalgo's multidisciplinary research has explored a number of other topics, including the use of artificial intelligence to evaluate the urban environment; the role of technology, time, and language in collective memory; the development of digital democracy tools and theories; and people's perception of artificial intelligence. He has authored several influential books, including "Why Information Grows," "The Atlas of Economic Complexity," and "How Humans Judge Machines."

Panelist: Helen Margetts

University of Oxford

How can data science, AI and social science research together help to improve the public sector?


Abstract

Data science and artificial intelligence have huge potential to improve policymaking, public services and governmental administration. But they also pose new risks and challenges, particularly for governance and regulation. This talk reports on research carried out at the public policy programme at the Alan Turing Institute, which works directly with policymakers to help the public sector maximise the potential and minimise the risks of the latest generation of data-driven technologies. Our multi-disciplinary team includes political scientists, economists, psychologists, and philosophers, as well as data scientists and AI specialists. Specifically, the talk presents two examples of research that (a) helps increase productivity in public services and (b) builds understanding of people's experience of online harms and of user controls to moderate online abuse. Both cases illustrate the importance of social science in AI research.

Biography

Helen Margetts is Professor of Society and the Internet at the University of Oxford, and Director of the Public Policy Programme at the Alan Turing Institute for Data Science and Artificial Intelligence. From 2011 to 2018, she was Director of the Oxford Internet Institute, a multi-disciplinary department of the University of Oxford, before which she was Professor of Political Science and Director of the School of Public Policy at UCL. She has researched and written extensively about the relationship between technology, politics, public policy and government, including over 100 articles and six books, among them a series of policy reports for the National Audit Office on Government on the Web (1999, 2002, 2007, 2009) and the book Digital Era Governance (Dunleavy and Margetts, 2006). Her latest book is Political Turbulence: How Social Media Shape Collective Action, which won the Political Studies Association's W.J. Mackenzie Prize for best politics book in 2017.

Stay Connected: HCOMP Community

We welcome everyone who is interested in crowdsourcing and human computation to:

  • Join the crowd-hcomp Google Group (mailing list) to post and receive crowdsourcing and human computation announcements (e.g., calls for papers and job openings), including updates about the conference. To subscribe, send an email to crowd-hcomp+subscribe@googlegroups.com.
  • Check our Google Group webpage to view the archive of past communications on the HCOMP mailing list.
  • Keep track of our Twitter hashtag #HCOMP2024.
  • Join the HCOMP Slack Community to be in touch with researchers, industry players, practitioners, and crowd workers interested in human computation and related topics.