Events

Katie Shilton – Trust, Trustworthiness and Participation: Findings From a Survey of Global Projects Navigating Participatory Forms of AI
February 18 @ 12:00 pm | Humanities 1, Room 210
This meeting is scheduled for Tuesday, February 18th, at noon in Humanities 1, Room 210 (HUM 210), with guest speaker Katie Shilton presenting “Trust, Trustworthiness and Participation: Findings From a Survey of Global Projects Navigating Participatory Forms of AI.”
As the discourse on responsible and trustworthy AI intensifies, Participatory AI (PAI) presents a compelling approach to the democratic development of automated technologies. But how should we think about whether, and how, participatory methods increase trust in, and the trustworthiness of, AI systems? This talk will report on a systematic examination of the landscape of methods and theoretical lenses used in global participatory AI projects, and connect those methods and lenses to trust-building. The talk will explore differences in theoretical frameworks, participation methods, and the details of shared tasks within the AI lifecycle across sectors and geographies. Our findings reveal an evolving definition of PAI, with actors implementing diverse methods and shared tasks. Focusing on shared tasks also provides a lens for analyzing how participation can build trust in, and the trustworthiness of, AI systems. Our analysis shows that participation alone is not a straightforward route to building public trust in AI technologies; rather, the promise of participation lies in increasing trustworthiness by broadening the diversity of expertise engaged in alignment and decision-making within AI technologies.
Katie Shilton is a professor in the College of Information at the University of Maryland, College Park, and is currently visiting faculty in Computational Media at UCSC. Her research focuses on technology and data ethics. She is a co-PI of the NSF Institute for Trustworthy Artificial Intelligence in Law & Society (TRAILS) and a co-PI of the UMD Values-Centered Artificial Intelligence (VCAI) initiative. She was also recently the PI of the PERVADE project, a multi-campus collaboration focused on big data research ethics. Other projects include improving online content moderation with human-in-the-loop machine learning techniques and designing experiential data ethics education. Katie received a B.A. from Oberlin College, and a Master of Library and Information Science and a Ph.D. in Information Studies, both from UCLA.
The Humanities Institute research cluster “Humanities in the Age of AI” is pleased to invite you to a series of meetings this winter quarter. The cluster brings together a diverse group of core participants: faculty members from various disciplines; graduate students in politics, history, literature, philosophy, feminist studies, and film and visual studies; and undergraduate scholars from computer science, computational media, and creative writing.