EFELIA Côte d'Azur
What is the Deep Learning School?
- A 5-day program in July
- 2 hours 45 minutes of classes and 3 hours of labs each day
- 5 lectures will be delivered by high-profile speakers, internationally renowned in the field, who are dedicated to fostering responsible and sustainable AI for societal benefit
- 5 "Expert Labs" on the topics of the lectures, supervised by subject-matter experts
Whether you are a researcher, an engineer, an expert in deep learning, or simply eager to learn more about these crucial methods at the core of modern AI, this program is designed for you!
2024 edition: practical information
- Deep Learning School 2024 will take place from July 1st to July 5th.
- Participants will have the opportunity to attend the program on the Campus SophiaTech, located in Sophia Antipolis, France.
- All lectures and labs will be conducted in English.
Deep Learning School 2024 registration
Interested? Register right now for Deep Learning School 2024!
Reservations for the full week (conferences only or conferences + workshops)
Daily bookings (à la carte formulas)
Registration for 3IA Côte d'Azur consortium members (students and staff), Université Côte d'Azur (students and staff) and AIDA (students and staff)
What are the topics for Deep Learning School 2024?
This year, the Deep Learning School will address the hot topics of the moment: Deep Learning, of course, but also NLP and the SciML revolution.
Speakers will address these issues and the concerns they may generate (personal data management and retention, environmental impact, explainability/interpretability, health/biology, etc.) from a responsible, human-centric and ethical angle.
- Frugal AI and NLP: Reducing the environmental footprint of large language models: challenges and solutions, with Professor Emma Strubell, who leads the Structure in(g) LAnguage LAB (SLAB) in the Language Technologies Institute at Carnegie Mellon University.
- Responsible AI and Equity: Algorithmic Fairness: A Pathway to Developing Responsible AI Systems with Professor Golnoosh Farnadi, who is a co-director of McGill’s Collaborative for AI & Society (McCAIS), and the founder and principal investigator of the EQUAL lab (EQuity & EQuality Using AI and Learning algorithms) at Mila/McGill University.
- Interpretable AI: Simpler Machine Learning Models for a Complicated World with Professor Cynthia Rudin, who directs the Interpretable Machine Learning Lab at Duke University.
- AI & Physics/Numerical Simulation: Robust Representation Learning with Transformers and LLMs for Engineering Problems with Professor Amir Barati Farimani, who heads the Mechanical and Artificial Intelligence Laboratory (MAIL) at Carnegie Mellon University.
- Foundation Models and Generative AI in Vision: From Transformers to Foundation models for Multimedia, and Generative AI with Matthieu Cord, full professor at Sorbonne Université, member of the ISIR laboratory, holder of the VISA-DEEP AI Chair and Scientific Director of Valeo AI.
Deep Learning School 2024 program
- Monday, July 1
Prof. Cynthia Rudin, Duke University (USA)
Professor Cynthia Rudin holds the distinguished title of Earl D. McLean, Jr. Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics at Duke University, where she leads the Interpretable Machine Learning Lab.
Before joining Duke, Professor Rudin held academic positions at MIT, Columbia, and NYU. She earned her undergraduate degree from the University at Buffalo and completed her PhD at Princeton University.
In 2021, she won the INFORMS Best OM Paper in OR Award for the best operations management paper published in Operations Research. In recognition of her outstanding contributions to the field, Professor Rudin received the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI). This accolade stands among the most prestigious in the realm of artificial intelligence, akin to renowned distinctions like the Nobel Prize and the Turing Award, and carries a substantial monetary reward.
Her accomplishments extend beyond this singular recognition. Professor Rudin is a three-time recipient of the INFORMS Innovative Applications in Analytics Award and was honored as one of the "Top 40 Under 40" by Poets and Quants in 2015. Additionally, Business Insider lauded her as one of the 12 most impressive professors at MIT in the same year. She was named a 2022 Guggenheim fellow and holds fellowships from the American Statistical Association, the Institute of Mathematical Statistics, and AAAI.
Professor Rudin has chaired prominent committees, including the INFORMS Data Mining Section and the Statistical Learning and Data Science Section of the American Statistical Association. She has served on committees for DARPA, the National Institute of Justice, AAAI, and ACM SIGKDD, as well as three committees for the National Academies of Sciences, Engineering, and Medicine.
Her expertise is sought after, evident in her keynote and invited talks at esteemed conferences such as KDD, AISTATS, INFORMS, Machine Learning in Healthcare (MLHC), Fairness, Accountability and Transparency in Machine Learning (FAT-ML), ECML-PKDD, and the Nobel Conference. Moreover, her groundbreaking work has been featured in major news outlets including the NY Times, Washington Post, Wall Street Journal, and Boston Globe.
- Tuesday, July 2
Prof. Emma Strubell, Carnegie Mellon University (USA)
Professor Emma Strubell holds the position of Raj Reddy Assistant Professor in the Language Technologies Institute within the School of Computer Science at Carnegie Mellon University (CMU), and serves as a Visiting Scientist at the Allen Institute for Artificial Intelligence (AI2). She also holds a courtesy faculty appointment in the Department of Materials Science and Engineering at CMU.
Before joining CMU, Professor Strubell obtained her Ph.D. from the University of Massachusetts, Amherst, where she was advised by Andrew McCallum. Prior to her doctoral studies, she earned a B.S. in Computer Science from the University of Maine, with a minor in Mathematics. Throughout her career, she has held internships and research positions at prominent tech companies including Amazon, IBM, Meta, and Google.
Professor Strubell’s research lies at the intersection of natural language processing (NLP) and machine learning. Her overarching goal is to bridge the gap between cutting-edge NLP techniques such as large language models and the diverse array of users who could benefit from these technologies, yet currently encounter barriers in their practical application. She leads a research group at CMU that aims to push the state-of-the-art towards this end, which manifests in a variety of more specific research challenges, including computation- and data-efficient machine learning for NLP, transfer learning and generalization, NLP for expert domains such as scientific articles and policy text, and ethical concerns in ML and NLP.
Professor Strubell is best known for her pioneering work characterizing the environmental impacts of AI. Her work has been recognized with a Madrona AI Impact Award, best paper awards at top NLP conferences, and cited in news outlets including the New York Times and Wall Street Journal.
- Wednesday, July 3
Prof. Golnoosh Farnadi, McGill University & Mila (Canada)
Dr. Golnoosh Farnadi serves as an Assistant Professor at McGill University's School of Computer Science and holds an Adjunct Professorship at the University of Montréal in Canada. Additionally, she is a visiting faculty researcher at Google, a core academic member at MILA (Quebec Institute for Learning Algorithms), and holds the Canada CIFAR AI chair. Dr. Farnadi is also the co-director of McGill’s Collaborative for AI & Society (McCAIS) and the founder and principal investigator of the EQUAL lab at Mila/McGill University.
The EQUAL lab (EQuity & EQuality Using AI and Learning algorithms) is a state-of-the-art research facility dedicated to advancing the fields of algorithmic fairness and responsible AI. With a mission to promote equity and equality in AI systems, the EQUAL lab leverages advanced learning algorithms and AI technologies to address pressing issues surrounding bias and discrimination in AI and machine learning models.
Prior to her current roles, Dr. Farnadi held a similar position at HEC Montréal (University of Montréal's business school). From 2018 to 2020, she was a post-doctoral IVADO fellow at the University of Montréal and MILA, focusing on fairness-aware sequential decision making under uncertainty with professors Simon Lacoste-Julien and Michel Gendreau. From 2017 to 2018, she worked as a postdoctoral researcher in the Statistical Relational Learning Group (LINQS) under the supervision of Professor Lise Getoor at the University of California, Santa Cruz in the USA.
Dr. Farnadi earned her Ph.D. in Computer Science from KU Leuven and Ghent University in Belgium in 2017. Her dissertation focused on user modeling in social media under the guidance of Professors Martine de Cock and Marie-Francine Moens. During her Ph.D., she conducted research as a visiting scholar at UCLA, the University of Washington in Tacoma, Tsinghua University in China, and Microsoft Research in Redmond, USA.
In recognition of her contributions, Dr. Farnadi was awarded the Google Scholar Award and Facebook Research Award in 2021. She was also named one of the Rising Stars, a list of 20 promising new diverse talents in AI Ethics. In 2023, Dr. Farnadi received a Google award for inclusion research and was a finalist for the WAI responsible AI leader of the year award. Additionally, she was recognized as one of the 100 Brilliant Women in AI Ethics.
- Thursday, July 4
Prof. Amir Barati Farimani, Carnegie Mellon University (USA)
Amir Barati Farimani earned his Ph.D. in mechanical science and engineering from the University of Illinois at Urbana-Champaign (USA) in 2015. His doctoral research was on using computational tools such as molecular dynamics simulation to investigate membrane nanopores for water desalination and DNA detection.
Following his Ph.D., he joined Professor Vijay Pande’s lab at Stanford University (USA) as a postdoctoral researcher. During this tenure, he integrated machine learning techniques with molecular dynamics simulations to explore the conformational changes of G-Protein Coupled Receptors (GPCRs), with a specific focus on Mu-Opioid Receptors to elucidate their free energy landscape, activation mechanism, and pathway.
Currently, Dr. Barati Farimani heads the Mechanical and Artificial Intelligence Laboratory (MAIL) at Carnegie Mellon University (USA). The lab's research interests lie at the intersection of machine learning, data science, and molecular dynamics simulations applied to health and bioengineering challenges. Embracing a multidisciplinary approach, the lab brings together researchers from diverse backgrounds including mechanical engineering, computer science, bioengineering, physics, materials science, and chemical engineering.
The lab's overarching mission is to advance science and engineering through the integration of state-of-the-art machine learning algorithms. Traditional engineering methodologies rely solely on physics-based rules, often overlooking the inherent noise and stochastic nature of systems. To address this, the lab pioneers algorithms capable of inferring, learning, and predicting mechanical systems based on data. By blending physics with machine learning, they develop more precise predictive models. Multi-scale simulations, including Computational Fluid Dynamics (CFD), Molecular Dynamics (MD), and Density Functional Theory (DFT), are utilized to generate the necessary data for these data-driven models.
- Friday, July 5
Prof. Matthieu Cord, Sorbonne Université (France), Scientific Director of Valeo AI
Matthieu Cord is a professor at Sorbonne University and scientific director of valeo.ai.
His academic research is carried out at the Institute of Intelligent Systems and Robotics (ISIR), where he heads the Machine Learning team (MLIA).
He currently holds a chair in the national AI program at Sorbonne's SCAI center, VISA-DEEP.
He is an honorary member of the Institut Universitaire de France and served for three years as an AI expert at the CNRS and ANR.
His research focuses on computer vision, machine learning and artificial intelligence. He is the author of over 200 international scientific publications on semantic visual analysis and multimodal vision and language understanding.
Deep Learning School 2024 lecturer lineup
- Prof. Cynthia Rudin
Prof. Cynthia Rudin’s Master class: Interpretable AI - Simpler Machine Learning Models for a Complicated World
Abstract:
While the trend in machine learning has tended towards building more complicated (black box) models, such models have not shown any performance advantages for many real-world datasets, and they are more difficult to troubleshoot and use. For these datasets, simpler models (sometimes small enough to fit on an index card) can be just as accurate. However, the design of interpretable models is quite challenging due to the "interaction bottleneck" where domain experts must interact with machine learning algorithms.
Prof. Cynthia Rudin will present a new paradigm for interpretable machine learning that solves the interaction bottleneck. In this paradigm, machine learning algorithms are not focused on finding a single optimal model, but instead capture the full collection of good (i.e., low-loss) models, which we call "the Rashomon set." Finding Rashomon sets is extremely computationally difficult, but the benefits are massive. Prof. Rudin will present the first algorithm for finding Rashomon sets for a nontrivial function class (sparse decision trees) called TreeFARMS. TreeFARMS, along with its user interface TimberTrek, mitigate the interaction bottleneck for users. TreeFARMS also allows users to incorporate constraints (such as fairness constraints) easily.
Prof. Cynthia Rudin will also present a "path," that is, a mathematical explanation, for the existence of simpler-yet-accurate models and the circumstances under which they arise. In particular, problems where the outcome is uncertain tend to admit large Rashomon sets and simpler models. Hence, the Rashomon set can shed light on the existence of simpler models for many real-world high-stakes decisions. This conclusion has significant policy implications, as it undermines the main reason for using black box models for decisions that deeply affect people's lives.
This is joint work with her colleagues Margo Seltzer and Ron Parr, as well as their exceptional students Chudi Zhong, Lesia Semenova, Jiachang Liu, Rui Xin, Zhi Chen, and Harry Chen. It builds upon the work of many past students and collaborators over the last decade.
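To make the idea concrete, a Rashomon set can be sketched in a few lines: it is the collection of all candidate models whose loss is within a tolerance epsilon of the best achievable loss. The toy one-dimensional threshold classifiers, dataset, and tolerance below are illustrative inventions for this page, not the TreeFARMS algorithm from the talk.

```python
# Toy illustration of a Rashomon set: all models whose loss is within a
# tolerance (epsilon) of the best achievable loss. Didactic sketch only.

def zero_one_loss(threshold, data):
    """Misclassification rate of the rule 'predict 1 if x >= threshold'."""
    errors = sum((x >= threshold) != y for x, y in data)
    return errors / len(data)

def rashomon_set(thresholds, data, epsilon):
    """All candidate thresholds whose loss is within epsilon of the best."""
    losses = {t: zero_one_loss(t, data) for t in thresholds}
    best = min(losses.values())
    return sorted(t for t, loss in losses.items() if loss <= best + epsilon)

data = [(0.1, 0), (0.3, 0), (0.45, 1), (0.6, 1), (0.9, 1)]
candidates = [i / 10 for i in range(11)]
print(rashomon_set(candidates, data, epsilon=0.2))  # [0.2, 0.3, 0.4, 0.5, 0.6]
```

Even in this tiny example, several different models are near-optimal at once, which is exactly the multiplicity that the Rashomon-set paradigm exploits.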
Here are papers Prof. Cynthia Rudin will discuss in the talk:
- Rui Xin, Chudi Zhong, Zhi Chen, Takuya Takagi, Margo Seltzer, Cynthia Rudin. Exploring the Whole Rashomon Set of Sparse Decision Trees. NeurIPS (oral), 2022.
- Zijie J. Wang, Chudi Zhong, Rui Xin, Takuya Takagi, Zhi Chen, Duen Horng Chau, Cynthia Rudin, Margo Seltzer. TimberTrek: Exploring and Curating Sparse Decision Trees with Interactive Visualization. IEEE VIS, 2022.
- Lesia Semenova, Cynthia Rudin, Ron Parr. On the Existence of Simpler Machine Learning Models. ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), 2022.
- Lesia Semenova, Harry Chen, Ronald Parr, Cynthia Rudin. A Path to Simpler Models Starts With Noise. NeurIPS, 2023.
“Since AI models are becoming incredibly complex, it is worth asking whether that extra complexity always leads to increased performance. In fact, it often does not! For tabular data with uncertain outcomes - like medical data, criminal justice data, and loan data - very simple models perform just as well. I’ve been curious about why that is, and I’m excited to speak about our work on this important topic in July.”
Prof. Cynthia Rudin
- Prof. Emma Strubell
Prof. Emma Strubell’s Master class: NLP & Frugal AI - Reducing the environmental footprint of large language models: challenges and solutions
Abstract:
Large language models (LLMs) have emerged as a potentially transformative AI technology, enabling new capabilities for multimodal data analysis and generation. However, these exciting advances come at an unprecedented computational cost that has grown in step with those capabilities, with corresponding energy and monetary demands. The increased computational demands of this technology not only limit who has access to develop, use, and shape uses of this technology, but may also have negative implications for the environment due to the increased greenhouse gas emissions required to develop and deploy these models.
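As a rough illustration of where those energy demands lead, training emissions are commonly estimated as energy drawn by the hardware, scaled by datacenter overhead (PUE) and the carbon intensity of the local grid. The function and every number below are hypothetical placeholders for illustration, not figures from Prof. Strubell's work.

```python
# Back-of-envelope estimate of training emissions:
# energy (kWh) * datacenter overhead (PUE) * grid carbon intensity (kgCO2e/kWh).
# All inputs below are invented placeholders, not measurements.

def training_emissions_kg(gpu_count, gpu_power_kw, hours, pue, grid_kg_per_kwh):
    """CO2-equivalent emissions for one training run."""
    energy_kwh = gpu_count * gpu_power_kw * hours
    return energy_kwh * pue * grid_kg_per_kwh

# Hypothetical run: 64 GPUs at 0.4 kW each for two weeks, PUE 1.1,
# on a grid emitting 0.4 kgCO2e per kWh.
emissions = training_emissions_kg(64, 0.4, 24 * 14, 1.1, 0.4)
print(round(emissions), "kg CO2e")
```

The same arithmetic also shows the levers the tutorial mentions: shrinking the workload lowers `energy_kwh`, while emissions-aware scheduling lowers the effective `grid_kg_per_kwh`.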
In this tutorial, I will first characterize the complex relationship between AI and the environment, and outline what is known about the environmental footprint of LLMs. Then, I will describe a variety of approaches that can be taken to reduce the environmental footprint of LLMs, including reducing the computational burden of individual LLM workloads, emissions-aware scheduling in datacenters, and applications of machine learning to climate change adaptation and mitigation.
“Large language models have great potential to help address substantial societal challenges, including climate change. However, these same technologies come with a high carbon footprint that must be drastically reduced over a short time period in order to meet climate goals. I’m excited to have the opportunity to present on the complex relationship between LLMs and the environment, and describe some of our recent work characterizing and mitigating the negative environmental impacts of LLMs at the Deep Learning School in July!”
Prof. Emma Strubell
- Prof. Golnoosh Farnadi
Prof. Golnoosh Farnadi’s Master class: Responsible AI & Fairness - Algorithmic Fairness: A Pathway to Developing Responsible AI Systems
Abstract:
The increased use of artificial intelligence and machine learning tools in critical domains such as employment, education, policing, and loan approval has led to concerns about potential harms and risks, including biases and algorithmic discrimination. As a result, a new field called algorithmic fairness has emerged to address these issues.
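One of the basic quantities the field of algorithmic fairness works with is demographic parity: the gap in positive-prediction rates between demographic groups. The sketch below is a generic illustration of that metric, with invented data; it is not taken from Prof. Farnadi's material.

```python
# Minimal demographic-parity check: compare the rate of positive predictions
# across demographic groups. Illustrative sketch with invented data.

def positive_rate(predictions, groups, group):
    """Fraction of positive predictions within one demographic group."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]          # model decisions (e.g. loan approvals)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A gap of zero would mean both groups receive positive decisions at the same rate; techniques throughout the ML pipeline (pre-, in-, and post-processing) aim to shrink such gaps without sacrificing too much accuracy.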
In this tutorial, Prof. Farnadi will stress the importance of fairness and provide an overview of techniques throughout the ML pipeline for ensuring algorithmic fairness. She will also explain why this can be a challenging task when considering other aspects of responsible AI, such as privacy. In conclusion, Golnoosh Farnadi will raise some open questions and suggest future directions for building a responsible AI system based on algorithmic fairness.
“Algorithmic fairness isn't just about eliminating bias; it's about prioritizing fairness at every stage of development and creating systems that reflect the diverse and complex world we live in.”
Prof. Golnoosh Farnadi
- Prof. Amir Barati Farimani
Prof. Amir Barati Farimani's Master Class: Robust Representation Learning with Transformers and LLMs for Engineering Problems
Abstract:
With the rise of Artificial Intelligence (AI) and Machine Learning (ML) in recent years, many complex problems in vision and computer science have been solved that were intractable for decades. In mechanical engineering (ME), complex problems still exist for which conventional techniques cannot offer viable solutions. Recent advances in AI have provided us with opportunities to merge and apply them to challenges in engineering; however, to accurately model and predict an engineering system, the representation and formulation of that problem into AI frameworks remain a challenge. The domain knowledge of science and engineering is needed to represent and beneficially use AI algorithms to achieve viable solutions.
To this end, I will talk about how effectively different areas of engineering can take advantage of AI to find solutions by integrating the physics and engineering domain knowledge. I will focus on examples in transport phenomena, additive manufacturing, and material discovery. Additionally, I will discuss the emerging role of Large Language Models (LLMs) in scientific discovery. These models can propose plausible scientific hypotheses and enhance data-driven discoveries by leveraging their extensive embedded knowledge.
I will illustrate how LLMs can be integrated with domain-specific knowledge in engineering to improve the accuracy and efficiency of tasks such as symbolic regression and equation discovery. Finally, I will demonstrate how modern deep learning models can be used in solid mechanics, CFD, and additive manufacturing, and how the integration of chemistry and physics into graph convolutional neural networks can enhance the accuracy of material property prediction.
- Prof. Matthieu Cord
Prof. Matthieu Cord’s Master class: Foundation models - From image to language and vice versa
Abstract:
Large Language Models (LLMs) have continually impressed the global community, revealing remarkable capabilities that become even more apparent as these models scale up.
In this seminar, I will first explore the field of computer vision, focusing on transformer models such as Vision Transformers (ViT), which are achieving outstanding performance on challenging vision benchmarks.
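For readers unfamiliar with Vision Transformers, their front end can be sketched in a few lines: the image is cut into fixed-size patches, and each flattened patch is linearly projected into a token embedding that the transformer then processes like words in a sentence. The image size, patch size, and embedding dimension below are illustrative assumptions, not values from the seminar.

```python
# Minimal sketch of the ViT front end: split an image into non-overlapping
# patches, flatten each one, and project it to a token embedding.
import numpy as np

def patchify(image, patch):
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    h, w, c = image.shape
    patches = [
        image[i:i + patch, j:j + patch].reshape(-1)
        for i in range(0, h, patch)
        for j in range(0, w, patch)
    ]
    return np.stack(patches)  # shape: (num_patches, patch * patch * C)

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))              # toy 32x32 RGB image
projection = rng.random((8 * 8 * 3, 64))     # learned in a real ViT
tokens = patchify(image, patch=8) @ projection
print(tokens.shape)  # (16, 64): 16 patch tokens of dimension 64
```

From here, a real ViT adds positional embeddings and feeds the token sequence through standard transformer blocks, which is what makes the interactions with LLMs discussed in the seminar so natural.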
We will then analyze the potential interactions between ViTs and LLMs. Additionally, we will examine how to leverage pre-trained models for vision-language tasks and the computational effort required to adapt unimodal models for multimodal tasks.
“Is an image only a thousand words?”
Prof. Matthieu Cord
Deep Learning School 2024 pricing information
For more information, visit our booking pages:
Packages
À la carte formulas
Registration for 3IA Côte d'Azur consortium members (students and staff), Université Côte d'Azur (students and staff) and AIDA (students and staff)
If you have any questions, contact the EFELIA team via email: violette.assati@univ-cotedazur.fr.