Deep Learning School 2024 at Université Côte d'Azur

What is the Deep Learning School?

  • A 5-day program in July
  • 2 hours 45 minutes of lectures and 3 hours of lab work each day
  • 5 lectures will be delivered by high-profile speakers, internationally renowned in the field, who are dedicated to fostering responsible and sustainable AI for societal benefit
  • 5 "Expert Labs" on the topics of the lectures, supervised by subject-matter experts
The Deep Learning School has been running since 2017. It was created in response to the intense and growing demand from professionals and amateurs alike to better understand, work with, and receive training in artificial intelligence.

Whether you are a researcher, an engineer, an expert in deep learning, or simply eager to learn more about these crucial methods at the core of modern AI, this program is designed for you!

2024 edition: practical information
  • Deep Learning School 2024 will take place from July 1st to July 5th.
  • Participants will have the opportunity to attend the program on the Campus SophiaTech, located in Sophia Antipolis, France.
  • All lectures and labs will be conducted in English.

Deep Learning School 2024 registration

What are the topics for Deep Learning School 2024?

This year, the Deep Learning School will address the current hot topics in Deep Learning, of course, but also NLP and the SciML (scientific machine learning) revolution.

Speakers will address these issues and the concerns they may generate (personal data management and retention, environmental impact, explainability/interpretability, health/biology, etc.) from a responsible, human-centric and ethical angle.

  • AI and ecology: The energy consumption of new types of Large Language Models, a question on everyone's mind, with Professor Emma Strubell, author of the first scientific article on this subject;
  • Ethical AI: Urgent issues related to biases and discrimination in machine learning models with Professor Golnoosh Farnadi, who works to advance the fields of algorithmic fairness and responsible artificial intelligence;
  • Machine Learning: Incorporating semantic concepts into machine learning models for vision with Professor Cynthia Rudin, who directs the Interpretable Machine Learning Lab at Duke University.
  • AI and physics: Recent advances in incorporating deep learning methods into the analysis of physical phenomena, in particular numerical simulations and physics-informed neural networks (PINNs), with Professor Amir Barati Farimani.
  • AI in medicine and biology

Deep Learning School 2024 program

Monday, July 1

Prof. Cynthia Rudin for Deep Learning School 2024

Interpretable AI by Prof. Cynthia Rudin, Duke University (USA)

Professor Cynthia Rudin holds the distinguished title of Earl D. McLean, Jr. Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics at Duke University, where she leads the Interpretable Machine Learning Lab.

Before joining Duke, Professor Rudin held academic positions at MIT, Columbia, and NYU. She earned her undergraduate degree from the University at Buffalo and completed her PhD at Princeton University.

In 2021, she won the INFORMS Best OM Paper in OR Award for the best operations management paper published in Operations Research. In recognition of her outstanding contributions to the field, Professor Rudin received the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI). This accolade stands among the most prestigious in the realm of artificial intelligence, akin to renowned distinctions like the Nobel Prize and the Turing Award, and carries a substantial monetary reward.

Her accomplishments extend beyond this singular recognition. Professor Rudin is a three-time recipient of the INFORMS Innovative Applications in Analytics Award and was honored as one of the "Top 40 Under 40" by Poets and Quants in 2015. Additionally, Business Insider lauded her as one of the 12 most impressive professors at MIT in the same year. She was named a 2022 Guggenheim fellow and holds fellowships from the American Statistical Association, the Institute of Mathematical Statistics, and AAAI.

Professor Rudin has chaired prominent committees, including the INFORMS Data Mining Section and the Statistical Learning and Data Science Section of the American Statistical Association. She has served on committees for DARPA, the National Institute of Justice, AAAI, and ACM SIGKDD, as well as three committees for the National Academies of Sciences, Engineering, and Medicine.

Her expertise is sought after, evident in her keynote and invited talks at esteemed conferences such as KDD, AISTATS, INFORMS, Machine Learning in Healthcare (MLHC), Fairness, Accountability and Transparency in Machine Learning (FAT-ML), ECML-PKDD, and the Nobel Conference. Moreover, her groundbreaking work has been featured in major news outlets including the NY Times, Washington Post, Wall Street Journal, and Boston Globe.

Tuesday, July 2

NLP & Frugal AI by Prof. Emma Strubell, Carnegie Mellon University (USA)

Wednesday, July 3

Prof. Golnoosh Farnadi for Deep Learning School 2024

Responsible AI & Fairness by Prof. Golnoosh Farnadi, McGill University & Mila (Canada)

Dr. Golnoosh Farnadi serves as an Assistant Professor at McGill University's School of Computer Science and holds an Adjunct Professorship at the University of Montréal in Canada. Additionally, she is a visiting faculty researcher at Google, a core academic member at Mila (Quebec Institute for Learning Algorithms), and holds a Canada CIFAR AI Chair. Dr. Farnadi is also the co-director of McGill's Collaborative for AI & Society (McCAIS) and the founder and principal investigator of the EQUAL lab at Mila/McGill University.

The EQUAL lab (EQuity & EQuality Using AI and Learning algorithms) is a state-of-the-art research facility dedicated to advancing the fields of algorithmic fairness and responsible AI. With a mission to promote equity and equality in AI systems, the EQUAL lab leverages advanced learning algorithms and AI technologies to address pressing issues surrounding bias and discrimination in AI and machine learning models.

Prior to her current roles, Dr. Farnadi held a similar position at HEC Montréal (University of Montréal's business school). From 2018 to 2020, she was a post-doctoral IVADO fellow at the University of Montréal and MILA, focusing on fairness-aware sequential decision making under uncertainty with professors Simon Lacoste-Julien and Michel Gendreau. From 2017 to 2018, she worked as a postdoctoral researcher in the Statistical Relational Learning Group (LINQS) under the supervision of Professor Lise Getoor at the University of California, Santa Cruz in the USA.

Dr. Farnadi earned her Ph.D. in Computer Science from KU Leuven and Ghent University in Belgium in 2017. Her dissertation focused on user modeling in social media under the guidance of Professors Martine de Cock and Marie-Francine Moens. During her Ph.D., she conducted research as a visiting scholar at UCLA, the University of Washington in Tacoma, Tsinghua University in China, and Microsoft Research in Redmond, USA.

In recognition of her contributions, Dr. Farnadi was awarded the Google Scholar Award and Facebook Research Award in 2021. She was also named one of the Rising Stars, a list of 20 promising new diverse talents in AI Ethics. In 2023, Dr. Farnadi received a Google award for inclusion research and was a finalist for the WAI responsible AI leader of the year award. Additionally, she was recognized as one of the 100 Brilliant Women in AI Ethics.

Thursday, July 4

AI & Physics/Numerical Simulation by Prof. Amir Barati Farimani, Carnegie Mellon University (USA)

Friday, July 5

AI in medicine and biology

Deep Learning School 2024 lecturer lineup

Prof. Cynthia Rudin

Prof. Cynthia Rudin’s Master class: Interpretable AI - Simpler Machine Learning Models for a Complicated World

Abstract:
While the trend in machine learning has tended towards building more complicated (black box) models, such models have not shown any performance advantages for many real-world datasets, and they are more difficult to troubleshoot and use. For these datasets, simpler models (sometimes small enough to fit on an index card) can be just as accurate. However, the design of interpretable models is quite challenging due to the "interaction bottleneck" where domain experts must interact with machine learning algorithms.

Prof. Cynthia Rudin will present a new paradigm for interpretable machine learning that solves the interaction bottleneck. In this paradigm, machine learning algorithms are not focused on finding a single optimal model, but instead capture the full collection of good (i.e., low-loss) models, which we call "the Rashomon set." Finding Rashomon sets is extremely computationally difficult, but the benefits are massive. Prof. Rudin will present the first algorithm for finding Rashomon sets for a nontrivial function class (sparse decision trees), called TreeFARMS. TreeFARMS, along with its user interface TimberTrek, mitigates the interaction bottleneck for users. TreeFARMS also allows users to incorporate constraints (such as fairness constraints) easily.

Prof. Cynthia Rudin will also present a "path," that is, a mathematical explanation, for the existence of simpler-yet-accurate models and the circumstances under which they arise. In particular, problems where the outcome is uncertain tend to admit large Rashomon sets and simpler models. Hence, the Rashomon set can shed light on the existence of simpler models for many real-world high-stakes decisions. This conclusion has significant policy implications, as it undermines the main reason for using black box models for decisions that deeply affect people's lives.
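The Rashomon-set idea above can be sketched in a few lines of Python. The dataset, the tiny class of one-feature threshold models, and the epsilon value below are all invented for illustration; TreeFARMS itself searches the far larger class of sparse decision trees.

```python
# Toy Rashomon set: among a small class of threshold models, keep every
# model whose loss is within epsilon of the best achievable loss.
# All data and parameters here are illustrative, not from the talk.

# Tiny dataset: (feature_value, label)
data = [(0.1, 0), (0.4, 0), (0.5, 1), (0.8, 1), (0.9, 1), (0.3, 0)]

def loss(threshold):
    """0-1 loss of the rule 'predict 1 if x >= threshold'."""
    errors = sum((x >= threshold) != bool(y) for x, y in data)
    return errors / len(data)

candidates = [i / 10 for i in range(11)]   # model class: thresholds 0.0 .. 1.0
best = min(loss(t) for t in candidates)    # optimal loss within the class
epsilon = 0.2
rashomon_set = [t for t in candidates if loss(t) <= best + epsilon]

print(best, rashomon_set)   # several distinct models are near-optimal
```

Even in this toy setting, several different models land in the Rashomon set, which is exactly why a user can then pick the one that is simplest or satisfies extra constraints such as fairness.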

This is joint work with her colleagues Margo Seltzer and Ron Parr, as well as their exceptional students Chudi Zhong, Lesia Semenova, Jiachang Liu, Rui Xin, Zhi Chen, and Harry Chen. It builds upon the work of many past students and collaborators over the last decade.

Here are papers Prof. Cynthia Rudin will discuss in the talk:
Rui Xin, Chudi Zhong, Zhi Chen, Takuya Takagi, Margo Seltzer, Cynthia Rudin. Exploring the Whole Rashomon Set of Sparse Decision Trees. NeurIPS (oral), 2022.
Zijie J. Wang, Chudi Zhong, Rui Xin, Takuya Takagi, Zhi Chen, Duen Horng Chau, Cynthia Rudin, Margo Seltzer. TimberTrek: Exploring and Curating Sparse Decision Trees with Interactive Visualization. IEEE VIS, 2022.
Lesia Semenova, Cynthia Rudin, Ron Parr. On the Existence of Simpler Machine Learning Models. ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), 2022.
Lesia Semenova, Harry Chen, Ronald Parr, Cynthia Rudin. A Path to Simpler Models Starts With Noise. NeurIPS, 2023.

Prof. Emma Strubell


Prof. Golnoosh Farnadi

Prof. Golnoosh Farnadi’s Master class: Responsible AI & Fairness - Algorithmic Fairness: A Pathway to Developing Responsible AI Systems

Abstract:
The increased use of artificial intelligence and machine learning tools in critical domains such as employment, education, policing, and loan approval has led to concerns about potential harms and risks, including biases and algorithmic discrimination. As a result, a new field called algorithmic fairness has emerged to address these issues.

In this tutorial, I will stress the importance of fairness and provide an overview of techniques throughout the ML pipeline for ensuring algorithmic fairness. I will also explain why this can be a challenging task when considering other aspects of responsible AI, such as privacy. In conclusion, I will raise some open questions and suggest future directions for building a responsible AI system based on algorithmic fairness.
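As a concrete (and heavily simplified) illustration of the kind of measure such fairness techniques target, the snippet below computes the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The predictions and group labels are invented for illustration.

```python
# Demographic parity difference on toy data: how much more often does the
# model output a positive decision (e.g. loan approval) for one group
# than for another? A value near 0 suggests parity on this metric.

predictions = [1, 0, 1, 1, 0, 1, 0, 0]                    # 1 = approve
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]    # protected attribute

def positive_rate(group):
    """Fraction of positive predictions within one group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

dp_difference = abs(positive_rate("a") - positive_rate("b"))
print(dp_difference)
```

Demographic parity is only one of several competing fairness criteria; as the abstract notes, satisfying it can conflict with other goals such as accuracy or privacy, which is part of what makes the field challenging.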

Prof. Amir Barati Farimani


Deep Learning School 2024 pricing information

For more information, visit our "Packages" and "À la carte" booking pages or contact the EFELIA team via email: violette.assati@univ-cotedazur.fr.