Understanding Artificial Intelligence

Contents
  • Foreword
  • Understanding the diversity of AI methods, their main principles and their historical development
  • The revolution of the 2010s: learning representations with Deep Learning
  • Generative AI
  • Limitations of ML models and the issue of biases
  • Social stereotypes reproduced by AI models
  • A need for ethical reflection by all?
  • A few resources to help you move beyond the dominant discourse and take a critical look
Foreword: better understanding the challenges of artificial intelligence

EFELIA Côte d'Azur is a government-funded project to develop AI training for students from bac-3 to bac+8 and, through continuing education, for staff at partner organizations, including the university, companies, and secondary schools.

By AI training, we mean:

1. Understanding the operating principles of these tools, and
2. Learning how to use the new tools corresponding to the AI systems on the market for different applications, such as ChatGPT.

 

At EFELIA Côte d'Azur, our approach focuses on point 1. Why?


If we draw a parallel with other tools that are socio-technical systems, such as cars, it's true that to get your driver’s license, you don't need to take an exam on car mechanics. When you use a computer, you don't need to know anything about electronics.

So why do we at EFELIA want you to understand AI beyond the use of specific systems? To stretch the metaphor, because AI systems are neither a car nor a computer: when you hit the brakes, it might not brake, and when you press a key, it might not display the same thing to two different people.

Machine learning systems should not be used for critical applications involving human life, and when they are, the consequences can be dramatic, as has already been the case [justice system, allocation of hospital beds, detection of benefit fraud].

These systems are not certified in the same way as cars, and there is no "technical inspection" for them, given the complexity of the social environments in which they can be deployed. Their reliability in the real world is therefore difficult to assess prior to deployment, unlike other systems which, however complex their internal workings, we only need to learn how to use.
 

The consequence of the complexity of artificial intelligence?


Thinking that it is possible to use these AI systems without understanding their production contexts, objectives, operating modes, flaws and limits is at best insufficient and at worst dangerous: for ourselves or our organizations (the quality of what we learn, of the content we produce or the processes we optimize, our legal responsibility), and for others (victims of biases, sexist or racist among others, obvious or subtle, automated and amplified).

Our aim is therefore to enable each individual to develop a real understanding of the field and the issues involved, so as to grasp the new possibilities of these approaches as fully as possible, while at the same time recognizing their limits and implications.

You won't be replaced by AI systems, but by people who know AI. Because of their very strong limits, even if these are not obvious at first glance, AI systems will evolve very rapidly, and acquiring knowledge beyond mere know-how will enable you to adapt as well as possible to these rapid and inevitable changes.

EFELIA Côte d'Azur is developing all its actions with these values in mind. We are sharing here a carefully selected and organized set of public resources, so that anyone who wants to better understand what we are talking about - AI beyond the mainstream discourse - can easily get started.
Understanding the diversity of AI methods, their main principles and their historical development

AI is a field born at the intersection of computer science, mathematics, and neuroscience in the mid-20th century. The aim is to design computational approaches (algorithms¹), i.e. approaches automated by computers, to tasks hitherto performed only by humans. These tasks may involve elaborate cognitive problems, such as proving mathematical theorems, or problems that living beings solve subconsciously and automatically, such as moving around, or even simply walking. This page retraces the history and the two main currents of thought and approach in AI:

History of AI (English)

These trends, known as top-down AI and bottom-up AI, have given rise to different methodological families in AI:
  • symbolic AI, which was the dominant approach from the 1950s to the 1990s and assumes the existence of high-level symbols to represent the building blocks of reasoning (the fields of reasoning, logic programming and rule-based systems),
  • and so-called statistical AI, or data-driven AI, which designs and implements statistical tools to identify patterns (associations, or "correlations" in statistics) in data, so that the data can be linked to a category or re-generated. This family of methods has been the most successful since the 2000s: it is the field of machine learning, with artificial neural networks in particular (a toy contrast between the two families is sketched just below).
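
To make the contrast concrete, here is a minimal, purely illustrative sketch in Python (the loan example, the thresholds and the scikit-learn model are our own choices, not taken from the resources above): the first function encodes human-authored rules over high-level notions, while the second approach learns its decision from example data.

```python
# Symbolic AI in miniature: explicit, human-written rules.
def rule_based_decision(income_keur, debt_keur):
    """Toy rule system: accept a loan if income is high enough and debt is moderate."""
    if income_keur > 30 and debt_keur < 0.4 * income_keur:
        return "accept"
    return "reject"

# Statistical AI / machine learning in miniature: the decision is *learned* from data.
from sklearn.linear_model import LogisticRegression

X = [[25, 5], [60, 10], [40, 30], [80, 15]]   # toy (income, debt) examples, in thousands of euros
y = [0, 1, 0, 1]                              # toy labels: 0 = reject, 1 = accept
model = LogisticRegression().fit(X, y)        # the pattern is found by optimization, not written by hand

print(rule_based_decision(50, 12))            # decided by explicit rules
print(model.predict([[50, 12]]))              # decided by a pattern learned from the data
```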

In addition to the courses offered at the university, in particular those designed for UniCA staff and the cross-disciplinary AI skills courses for all students (L1 in 2024, L2 and L3 in 2025), here are two high-quality, accessible sources:

A third, more advanced resource on AI is:


¹ Definition taken from the Montreal Declaration: an algorithm is a method of solving problems by means of a finite, unambiguous series of operations. More precisely, in an artificial intelligence context, it is the series of operations applied to input data to obtain the desired result.
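
To make this definition concrete, here is a minimal example of an algorithm in the above sense (our own illustration, not taken from the Declaration): a finite, unambiguous series of operations applied to input data (a list of grades) to obtain the desired result (their average).

```python
def average(grades):
    """A finite, unambiguous series of operations applied to input data to obtain the desired result."""
    total = 0.0
    for g in grades:            # step 1: add up all the input values, one by one
        total += g
    return total / len(grades)  # step 2: divide the sum by the number of values

print(average([12, 15, 9]))     # 12.0
```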
The revolution of the 2010s: learning representations with Deep Learning

For many problems with so-called unstructured data (images or text, for example, as opposed to databases), symbolic AI methods ran up against their inability to capture the variability of the real world: a cat in an image may appear in various contexts, at various angles, in various positions, in various colors, and so on. In the same way, the same fact can be stated in a great many ways, and this is even more true of a sentiment expressed in a text.

Until 2005, machine learning approaches to image classification relied on the pre-extraction, based on human assumptions, of features deemed relevant for classification. For example, to recognize buildings in aerial images, one could pre-extract (list) all the contours (lines) in an image, and then apply an ML approach mixing statistics and optimization to determine whether a building is present (or whether it is just a road, for example).

Although they belong to statistical AI, these approaches shared with symbolic AI the fact that they were based on human reasoning: what was pre-determined was no longer a set of symbols deemed useful for performing the task, but descriptors (a more flexible version of symbols, as it were) that were hoped to be useful for the task, pre-extracted and then fed as input to a machine learning model (algorithm), such as SVMs or boosting.
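
Here is a minimal sketch of this two-stage pipeline (our own illustration: the HOG descriptor, the small digits dataset and the SVM from scikit-learn / scikit-image are stand-ins for the aerial-image example): descriptors are first pre-extracted according to human assumptions, then a classical ML model is trained on them.

```python
# Stage 1: pre-extract human-chosen descriptors (here, histograms of oriented gradients).
# Stage 2: feed them to a classical ML model (here, a support vector machine).
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from skimage.feature import hog

digits = datasets.load_digits()                    # small 8x8 grayscale images of digits
features = [hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1)) for img in digits.images]

X_train, X_test, y_train, y_test = train_test_split(features, digits.target, random_state=0)
classifier = svm.SVC().fit(X_train, y_train)       # the ML step: statistics + optimization
print("accuracy on held-out images:", classifier.score(X_test, y_test))
```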

This pre-determination limited the effectiveness of these methods: other descriptors, or "representations", might be more relevant (imagine trying to describe what visually distinguishes a cat from a dog: describing the paws, muzzle and coat might be insufficient, even if the description were partially automated).

Methods based on Deep Learning have been able to overcome this limitation. In the UniCA courses, we use illustrations to explain the principle: relevant patterns are identified by learning them, and their presence or absence in the image is checked by correlation. These patterns are identified by "training" or "learning", i.e. by mathematical optimization (function minimization) that minimizes the error made by the model when it predicts what the image contains. To compute this error (a simple subtraction), a human must have indicated, for each training image, whether or not it contains a cat or a dog (or any other class). You can find such an explanation in a simple and concise case here:


Learning to identify relevant patterns for classification
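
To make the training principle concrete, here is a minimal PyTorch sketch (our own illustration: the tiny network and the random tensors standing in for annotated cat/dog images are assumptions, not the course material). The model's parameters are adjusted by optimization so as to reduce the error between its predictions and the human-provided labels.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                        # a tiny convolutional network
    nn.Conv2d(3, 8, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                          # two outputs: "cat" / "dog"
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()               # measures the error between prediction and label

images = torch.randn(16, 3, 32, 32)           # stand-ins for training images
labels = torch.randint(0, 2, (16,))           # stand-ins for human annotations (0 = cat, 1 = dog)

for step in range(100):                       # "training" = iterative minimization of the error
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)     # how wrong is the model on these examples?
    loss.backward()                           # gradient of the error w.r.t. each parameter
    optimizer.step()                          # small parameter update that reduces the error
```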

The keys to the success of these new machine learning models (deep learning being part of machine learning) have been:
  • more efficient types of artificial neural networks to learn the right data representations for the targeted image classification tasks,
  • greater computing power thanks to parallelization of calculations on graphics processing units (GPUs),
  • and, last but not least, the sheer volume of annotated data: hundreds of thousands of images annotated by humans.
Generative AI

In the case of images, the learned representations take the form of the presence or absence of patterns, ranging from simple shapes to more complex forms assembled over the layers of the neural network, sometimes interpretable by eye, as illustrated here:

Patterns chosen as relevant to classify

In the case of text, models likewise learn to represent words: they find ("optimize"/"learn") a numerical representation (an array of numbers, or vector) for each word, from which the probability of other words being its neighbors can be calculated. Training then consists of maximizing the probability of words appearing close to each other, for words that are actually neighbors in the texts used to train the model. This principle is explained intuitively on pages 7 to 9 of this report:

Learning numerical representations of words (pages 7-9)

and in this page.
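
As a concrete illustration of this principle, here is a minimal sketch using the gensim library's Word2Vec implementation (our own choice of tool and toy corpus, not taken from the report): each word is assigned a vector of numbers, optimized so that words appearing in similar contexts end up with similar vectors.

```python
from gensim.models import Word2Vec

# A toy corpus: in practice, training uses millions of sentences.
sentences = [
    ["the", "cat", "drinks", "milk"],
    ["the", "dog", "drinks", "water"],
    ["the", "cat", "chases", "the", "mouse"],
]
model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, epochs=200)

print(model.wv["cat"])                        # the learned numerical representation (a vector)
print(model.wv.similarity("cat", "dog"))      # words with similar neighbors get similar vectors
```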

This approach results in a language model: a model that represents each word numerically in such a way that it can reproduce the same probabilities of joint occurrence, or "co-occurrences", as in the texts used to train ("optimize"/"parameterize") the model. Unlike the problem of image classification, which requires images annotated by humans to indicate whether each one contains an element of a certain class (dog, cat, etc.), the basic design of a language model does not require data annotated by humans. All that is needed is a corpus of texts: the model is trained to identify the most likely neighboring words given a set of input words, in particular the words that follow.
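
A minimal illustration of this self-supervised setup (our own sketch, with a toy sentence): the "labels" are simply the words that actually follow in the raw text, so no human annotation is required, only a corpus.

```python
corpus = "the cat sat on the mat".split()

# Each training example pairs a context (the words read so far) with the word that follows.
training_pairs = [(corpus[:i], corpus[i]) for i in range(1, len(corpus))]

for context, next_word in training_pairs:
    print(context, "->", next_word)
# A language model is then optimized to give a high probability to each actual next word.
```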

Language modeling, even if it can be used for classification (of sentiment, for example), intrinsically enables the generation of new words following previous input data: these are generative AI models. Hence the name GPT: Generative Pre-trained Transformer.

2017 marked the arrival of a new type of deep artificial neural network, known as the Transformer. While the principle of modeling word co-occurrences remains the same, these models have achieved much better performance on text generation tasks (such as translation), thanks to two things:

  • the considered text window, which can be greatly extended: from 5 words in the first models to around 4 pages with today's GPT-type models.
  • the flexibility of the patterns searched for in the text: not only are they learned rather than pre-defined by hand as before (we no longer look only for noun-verb-complement relationships, as used to be done in linguistics, but for any relationship that might be useful for predicting neighboring words), they may also depend on the neighboring words themselves (as if the eye and nose patterns learned above could instead become paw and snout patterns depending on the input image). In this way, the representation of a word can depend on its neighbors! You can see this here by exploring the position in the word representation space of the word "banks", for example, which has two meanings. A minimal sketch of this mechanism follows this list.
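
Here is a minimal numpy sketch of the attention mechanism at the heart of Transformers (our own, heavily simplified illustration: real models use learned projections for queries, keys and values): each word's representation is recomputed as a weighted mix of its neighbors' representations, which is how the same word can end up with different vectors in different sentences.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X):
    """X has one row per word (toy embeddings). Returns context-dependent representations."""
    scores = X @ X.T / np.sqrt(X.shape[1])   # how strongly each word should attend to each other word
    weights = softmax(scores)                # attention weights, one row per word, summing to 1
    return weights @ X                       # each word becomes a weighted mix of its neighbors

X = np.random.rand(5, 4)                     # 5 words, 4-dimensional toy embeddings
print(self_attention(X))                     # the vector for each word now depends on the others
```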


Transformers are the deep artificial neural network models that underlie all the major language models today:

Large Language Models in 2023 (Fig. 2)

Since 2020, Transformer models have also been shown to be very effective for image data. While image generation methods also involve other learning approaches (such as diffusion models or adversarial networks), in text as in images it is Transformer models that form the basis of today's most powerful systems for text and image generation.

For both text and images, these Transformer models are therefore pre-trained to predict neighboring words from other words, or to reconstruct hidden parts of images from neighboring pixels. To do this, they require very large image or text databases, but these do not necessarily have to be annotated by humans: they can simply be collected from the Web. Similarly, when two models are trained together to predict text-image pairings, they can produce aligned text-image representations, which are then used in Stable Diffusion, DALL-E, GPT-4 and other systems.
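
As a rough illustration of this text-image coupling (our own sketch in the spirit of CLIP-style contrastive training, which the sources above do not detail): two models embed images and their captions into the same vector space, and training pushes each image towards its own caption and away from the others.

```python
import torch
import torch.nn.functional as F

# Stand-ins for the outputs of an image encoder and a text encoder on 4 image-caption pairs.
image_embeddings = F.normalize(torch.randn(4, 64), dim=1)
text_embeddings = F.normalize(torch.randn(4, 64), dim=1)

logits = image_embeddings @ text_embeddings.T   # pairwise image-text similarities
targets = torch.arange(4)                       # image i should match caption i
loss = F.cross_entropy(logits, targets)         # pulls matching pairs together in the shared space
print(loss)
```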

These models, pre-trained with text or images not annotated¹ by humans, produce text or image representations that can be adapted and applied to other specific problems: these are what we call "foundation models", whose creation is a major leadership challenge today, explained simply in this video (from IBM):

Foundation models (generalized LLMs): principle

¹ The texts accompanying the images, along with any descriptions, were created by humans before the images were collected from the Internet.

Limitations of ML models and the issue of biases

The explanations above may have already alerted you to the fundamental limitations of these machine learning approaches, limitations which must shape the way we envisage their use and development, and how we question them as socio-technical systems.

Firstly, the fact that the success of these methods depends on vast quantities of data:

Then there is the fact that these methods, like any computational approach, require a simplification of reality in order to produce results, starting from a limited vision of the world that ignores all or part of the context: what do we decide to give as input to the algorithm, what do we define as the possible outputs, how do we measure the error committed (by comparison with what data, created by whom and for what purpose), and what kind of link between input and output can the AI method (such as ML) actually find? [The Fallacy of AI Functionality, Raji et al., article, video; Data and its (dis)contents, Paullada et al.; AI Snake Oil, Narayanan]

Finally, there is the fact that, to solve a given task, the AI model identifies and uses patterns in the data that link elements together (for example, the co-occurrence of words to generate text, or the co-appearance of visual elements to classify or generate images). The AI model then exploits these patterns of association, reproducing them when it generates new data. While the data may reflect stereotypes held by humans, the automated reproduction of patterns contrary to the values of the society in which we wish to live should alert us and make us think: this is the problem of bias in AI, a fundamental limitation of machine learning approaches, which we document further below.

Social stereotypes reproduced by AI models

With seminal work by Joy Buolamwini and Timnit Gebru on AI systems for facial recognition [Gender Shades], the MIT Technology Review headlined back in 2017: Forget Killer Robots—Bias Is the Real AI Danger.

Word representations can also reflect unbalanced ("biased") associations between socially constructed categories (such as gender, race, age, or sexual orientation) and certain attributes. This was demonstrated as early as 2016, for binary gender and occupational categories in particular, showing for example that the association between "man" and "computer programmer" is as strong as that between "woman" and "homemaker".

This work has since been generalized, and is more relevant than ever with today's generative AI models. We cite a few key resources below.

Firstly, these associations between semantic concepts have also been identified in human cognitive functioning. The strength of these automatic associations in memory can be measured by implicit association tests (IATs), introduced in 1998, also known as the Implicit Stereotypy Index. A concise explanation can be found here:
Cognitive biases

In 2017, Caliskan et al. showed that word representations learned by ML models from linguistic corpora contain human-like biases, by defining a "Word Embedding Association Test" and showing that it correlates with IAT scores for implicit stereotypes in Western populations. Similar results have since been shown for recent language models, including intersectional biases.
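
To give an idea of the kind of measurement involved, here is a simplified sketch in the spirit of the Word Embedding Association Test (our own illustration: the hand-made toy vectors below are not real results; actual studies use vectors learned from large corpora): the bias is quantified as a difference of average cosine similarities between a target word and two sets of attribute words.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(target, attrs_a, attrs_b, wv):
    """Positive if `target` is closer (on average) to attribute set A than to attribute set B."""
    sim_a = np.mean([cosine(wv[target], wv[a]) for a in attrs_a])
    sim_b = np.mean([cosine(wv[target], wv[b]) for b in attrs_b])
    return sim_a - sim_b

# Hand-made toy vectors, purely to show the computation.
wv = {
    "programmer": np.array([0.9, 0.1]),
    "homemaker":  np.array([0.1, 0.9]),
    "man":        np.array([0.8, 0.2]),
    "woman":      np.array([0.2, 0.8]),
}
print(association("programmer", ["man"], ["woman"], wv))  # > 0: skewed towards "man"
print(association("homemaker", ["man"], ["woman"], wv))   # < 0: skewed towards "woman"
```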

The most recent large language models, including but not limited to those underlying ChatGPT, also encode subtle implicit biases. One study shows in particular the problematic and discriminatory potential of GPT-4 in clinical diagnosis and medical training. Others show social stereotypes in the generation of journalistic content and even in the generation of computer code.

The same biased associations between concepts have been found in the large image databases used to train AI models, and learned representations of visual concepts have also been shown to encode human stereotypes.
 

Recently, on the data side, it has been shown that the massive datasets combining images and text that are used to train large image-generation AI models contain a large fraction of hateful content that resists filtering (and this fraction increases with dataset size!).

On the model side, it has been shown that the images generated by a generative AI model may, on average, be more biased than the images in the dataset used for training: with AI models, human biases can therefore be automated and amplified.

A need for ethical reflection by all?

We have just seen that the very way ML models work, by reproducing the patterns of association present in the data, encodes stereotypes (which are themselves associations) whose perpetuation is harmful to many social groups and contrary to shared values. The automation of this reproduction of stereotypes is also a danger to be considered (disempowerment of humans, loss of visibility and control, illusion of objectivity).

The perpetuation of stereotypes that discriminate against whole groups of the population must be carefully considered when deciding to use or deploy an AI system. This is one of the reasons why EFELIA Côte d'Azur wishes to contribute to everyone's understanding of the principles and limits of AI methods.

What kind of world do we want to live in, and how should we view our use of and involvement with AI tools in the light of this? These are crucial questions. They have been the subject of a declaration: The Montreal Declaration for the Responsible Development of Artificial Intelligence.

These necessary questions are being actively studied in the field of AI ethics, and we quote here from the report by our partner Université Laval:

Deliberative conditions are important to empower individuals and communities to make meaningful choices about technology, to move from being passive users or subjects of technology to being active agents who constructively shape patterns of technological development. Such an approach offers citizens the means of emancipation, training and empowerment, rather than making them the guinea pigs of technological experiments (Latour, 2001).

A few resources to help you move beyond the dominant discourse and take a critical look

How can we envisage the nature of interactions with a conversational agent like ChatGPT? Can it be deployed in any interaction?

What is the environmental cost of making a request to an AI system?


Does an AI system have to be ethical?

Can AI systems be designed to tackle any problem?

  • A. Narayanan, "How to recognize AI snake oil," 2022. webpage, pdf
  • I. D. Raji, I. E. Kumar, A. Horowitz, and A. Selbst, "The Fallacy of AI Functionality," in 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea: ACM, Jun. 2022, pp. 959–972. doi: 10.1145/3531146.3533158, video

And why do we speak of "AI systems" rather than "AIs"?
  • For the reasons mentioned here [AIMyths], we consider it inappropriate to speak of AI in a countable way, i.e. "an AI" or "AIs". We urge you to prefer "AI" to refer to the field in general, and "AI systems" (AIS), as in the Montreal Declaration and at OBVIA, from our partner Université Laval.


What are the myths created and conveyed by the dominant discourse on AI?


If we push further the analysis of why AI systems reproduce social biases, are there not concepts that are more delicate to evoke, but more explanatory of the real causes?

  • Catherine D'Ignazio, The Urgency of Moving from Bias to Power, 2023. EDPL preface.
  • A. Birhane et al., The cost of scale thinking (pages 3-4): For instance, Science and Technology Studies (STS) scholars and critical data and AI studies have repeatedly emphasized that “scale thinking” stands in stark opposition to values such as societal equity and effective systemic change [26, 36]. In fact, unwavering commitment to scalability is instrumental to the realization of central objectives driving big technology corporations, such as profit maximization, market monopoly, and the centralization of power in a handful few, all too often at the expense of prioritization of informed consent, justice, and consideration for societal impacts of model.
  • M. Abdalla and M. Abdalla, Big Tobacco, Big Tech, and the Threat on Academic Integrity, 2021.


But for those of us involved in AI research and teaching, how do we go about tackling the complex, multi-faceted issues involved? Here are a few points to consider:


 

Written by Lucile Sassatelli, March 2024.