VOILA! Seminars: Visions Of InteLligence Artificielle

EFELIA Côte d'Azur, in conjunction with the Chair in AI and Innovation Economics, is launching its seminar series: Visions Of Artificial Intelligence, VOILA!

The VOILA! seminars explore the frontiers of AI in a way that is inclusive and open to all. They are designed to engage the broad UniCA community of teacher-researchers, students, and academic partners scientifically, and to offer food for thought, and some answers, on the big questions spreading through society and hence the university community, on themes such as AI & Environment, AI & Work, AI & Education, AI & Media, AI & Law, AI & Creation, AI & Health, and more.

Each seminar consists of two parts: a lecture from 4pm to 5pm given by an expert in the field, followed by a round-table discussion from 5pm to 6pm led by UniCA researchers.

All seminars are accessible online and by default take place exclusively online, with occasional hybrid sessions (location indicated, and refreshments offered).
These sessions will also be recorded and made available online afterwards. We are accompanying this launch with the publication of our new page: Comprendre l'Intelligence Artificielle (soon available in English).
Consisting of organized, explained, and mostly freely accessible content, the page is open to anyone wishing to understand what AI is, what it stands for, what is at stake, and its limits and possibilities, within the broad socio-technical context in which this field in its own right is developing, and whose systems are changing society.

Registration is free (links below) but mandatory in order to receive the connection link on the morning of the event.

April 11, 2024, 4pm-6pm: AI & Work

Presented by: Paola Tubaro, CNRS and ENSAE

Title: The global labor of AI: A journey from France to Brazil, Madagascar, and Venezuela

Abstract: Labor plays a major, albeit largely unrecognized role in the development of artificial intelligence (AI). Machine learning algorithms are predicated on data-intensive processes that rely on humans to execute repetitive and difficult-to-automate, but no less essential, tasks such as labeling images, sorting items in lists, and transcribing audio files. Networks of subcontractors often recruit ‘data workers’ to execute such tasks in lower-income countries with long-standing traditions of informality and less-regulated labor markets. I’ll highlight the working conditions and the profiles of data workers in Venezuela, Madagascar, and as an example of a richer country, France. The cross-country supply chains that link these data workers to core AI production sites maintain economic dependencies from colonial times and generate inequalities that compound with those inherited from the past.

Bio: Paola Tubaro is research director (Directrice de Recherche) at the National Centre for Scientific Research (CNRS), and professor at ENSAE. Trained as an economist before turning to sociology, she practices interdisciplinary research that leverages synergies between sociology, social network analysis, and computer science. She currently studies the place of human labour in the global production networks of artificial intelligence, social inequalities in digital platform work, and the spread of online disinformation. She has also published extensively on research methodology, data ethics, and research ethics.

Panel: Professor Tubaro's talk will be followed by a roundtable from 5pm to 6pm on the theme of AI & Work. The panelists will be Léonie Blaszyk-Niedergang (PhD student in Law, UniCA) and Gérald Gaglio (Professor of Sociology, UniCA). This talk and the following roundtable will be held in French; slides will be in English.

REGISTRATION LINK
Registration is now closed.
Replay of the conference "The global labor of AI: A journey from France to Brazil, Madagascar, and Venezuela" available on our YouTube channel.

April 18, 2024, 4pm-6pm: AI & Ethics

By: Giada Pistilli, Principal Ethicist, Hugging Face

Title: Exploring the Ethical Dimensions of Large Language Models Across Languages

Abstract: This talk presents a study of the ethical implications of Large Language Models (LLMs) across multiple languages, grounded in the philosophical concepts of ethics and applied ethics. Through a comparative analysis of both open and closed LLMs, using prompts on sensitive issues translated into several languages, the study applies qualitative methods such as thematic and content analysis to examine LLM outputs for ethical considerations, looking in particular at instances where models refuse to respond or trigger content filters. Key areas of inquiry include the variability of LLM responses to identical ethical prompts across different languages, the effect of prompt framing on responses, and the uniformity of LLMs' refusals to address value-laden questions in varied linguistic and thematic contexts.

Bio: Giada Pistilli is a philosophy researcher specializing in ethics applied to Conversational AI. Her research mainly focuses on ethical frameworks, value theory, applied and descriptive ethics. After obtaining a master’s degree in ethics and political philosophy at Sorbonne University, she pursued her doctoral research in the same faculty. Giada is also the Principal Ethicist at Hugging Face, where she conducts philosophical and interdisciplinary research on AI Ethics and content moderation.

Panel: The round table from 5pm to 6pm following the presentation will address the theme of AI and ethics. The panelists will be Frédéric Precioso, full professor in computer science and AI at UniCA, and Jean-Sébastien Vayre, associate professor in sociology at UniCA. Giada Pistilli's presentation will be in English, while the round table that follows will be in French.

REGISTRATION LINK
Registration is now closed.
Replay of the conference "Exploring the Ethical Dimensions of Large Language Models Across Languages" available on our YouTube channel.

May 23, 2024, 4pm-6pm: AI & Biases

Place: Campus SophiaTech and online

By: Sachil Singh, York University

Title: The Datafication of Healthcare: An eye on racial surveillance and unintended algorithmic biases

Abstract: Algorithms are increasingly used in healthcare to improve hospital efficiency, reduce costs, and better inform patient diagnoses and treatment plans. Once implemented, algorithms may appear to end-users as abstract, autonomous, and detached from their designers. On the contrary, I present preliminary findings from ongoing interviews with data scientists that challenge perceptions of algorithms as objective, neutral, and unbiased technologies. I also share concerns about patient surveillance, particularly the collection of race data that allegedly improves healthcare algorithms. When coupled with healthcare practitioners’ own racial biases, the compounding impact can deepen already existing racial inequalities even beyond healthcare.

Bio: Dr. Sachil Singh is an Assistant Professor of physical culture and health technologies in datafied societies, located in the Faculty of Health at York University in Toronto. His main areas of research are medical sociology, surveillance, algorithmic bias, and race. A sociologist by training, Dr. Singh currently examines data scientists’ unintended biases in their creation of healthcare algorithms. He is also Co-Editor of the interdisciplinary journal Big Data & Society.

Panel: The round table from 5pm to 6pm following the presentation will address the theme of AI and socio-technical systems. The panelists will be Anne Vuillemin, full professor in Science and Technology of Physical and Sporting Activities (STAPS) at UniCA, and Valentina Tirloni, associate professor in philosophy at UniCA. Sachil Singh's presentation will be in English, and the round table that follows will be in English and French.

REGISTRATION LINK

June 6, 2024, 4pm-6pm: AI & Language Technologies for Education

This session takes the form of a cineforum, where we encourage everyone in the audience to engage in exchanges and shared reflections. We will first watch Professor Emily M. Bender's recorded presentation on "Meaning making with artificial interlocutors and risks of language technology". Professor Bender is co-author of major contributions (such as [1]) identifying possible risks associated with large language models (such as ChatGPT) and possible measures to mitigate these risks. We will then discuss the video together, taking your questions and comments and providing further clarification and explanation from experts in large language models. We will propose a framework for exchanges around the applications of language models to teaching at university, in order to start thinking about how best to approach current moves to incorporate AI tools into teaching.

[1] E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada: ACM, Mar. 2021, pp. 610–623. doi: 10.1145/3442188.3445922.

REGISTRATION LINK

Leaflet of Season 1 of VOILA!