On August 1, 2024, Regulation (EU) 2024/1689, laying down harmonized rules on artificial intelligence (the “AI Regulation”), entered into force. Its aim is to establish a legal framework ensuring that AI systems in the EU are safe, transparent, and ethical, and that they respect fundamental rights.
Although the Regulation is already in force, many of its provisions apply gradually, so its obligations become binding in phases.
Since February 2, 2025, the first phase of application of the AI Regulation has been in effect: the prohibition of AI systems posing an unacceptable risk now applies.
Consequently, from that date there is an absolute ban, within EU territory, on placing on the market, putting into service, and using AI systems that pose an unacceptable risk to fundamental rights and people’s safety.
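For orientation, here is a minimal sketch of the Regulation’s main application milestones (see Article 113), written in Python as an aide-mémoire; the names AI_ACT_MILESTONES and milestones_in_force are our own illustrative assumptions, and the one-line summaries simplify the exact scope of each phase.

```python
from datetime import date

# Illustrative summary of the AI Regulation's phased application (Article 113).
# Each entry pairs a milestone date with the obligations that start to apply.
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "Entry into force of the Regulation",
    date(2025, 2, 2): "Prohibitions on unacceptable-risk AI systems (Article 5) "
                      "and AI literacy obligations apply",
    date(2025, 8, 2): "Governance rules, obligations for general-purpose AI "
                      "models, and the sanctioning regime apply",
    date(2026, 8, 2): "Most remaining provisions, including the bulk of the "
                      "high-risk regime, apply",
    date(2027, 8, 2): "End of the extended transition for high-risk AI embedded "
                      "in products covered by EU harmonisation legislation",
}

def milestones_in_force(today: date) -> list[str]:
    """List the milestones already applicable on a given date."""
    return [label for d, label in sorted(AI_ACT_MILESTONES.items()) if d <= today]

# Example: which obligations already apply in March 2025?
for label in milestones_in_force(date(2025, 3, 1)):
    print("-", label)
```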
It is therefore essential to determine whether the AI system we intend to place on the market, put into service, or use falls within the prohibited practices of Article 5 of the AI Regulation; a checklist-style sketch of that triage follows the list of prohibited systems below.
Prohibited AI Systems
Below we present the prohibited AI systems, what they consist of, and some examples drawn from the European Commission’s Guidelines on prohibited artificial intelligence (AI) practices.
Covert Manipulation, art. 5.1 (a)
These are AI systems that deploy subliminal techniques operating below a person’s conscious awareness, or purposefully manipulative or deceptive techniques, with the aim or effect of materially distorting behavior.
A typical case is the use of subliminal visual or auditory messages: embedded images or sounds that are not consciously perceived but can still be processed by the human brain.
Example: A chatbot that impersonates a friend or family member using a synthetic voice, simulating a real person in order to commit fraud or cause significant harm.
Exploitation of Vulnerabilities, art. 5.1 (b)
These are AI systems that exploit vulnerabilities due to age, disability, or a specific social or economic situation, with the aim of materially distorting behavior.
Example: A chatbot intended to provide mental health support to people with disabilities that exploits their vulnerability to influence them into buying expensive medical products.
Social Scoring, art. 5.1 (c)
These are AI systems that evaluate or classify people based on their social behavior or personal characteristics, where the resulting social score leads to detrimental or unfavorable treatment.
Example: China’s social credit system, which assesses citizens based on their behavior, financial history, social interactions, and more. The resulting scores can significantly affect a person’s life, including their access to basic services and job opportunities.
Criminal Prediction, art. 5.1 (d)
These are AI systems that evaluate or predict the risk of individuals committing a crime based solely on profiling or on their personality traits and characteristics.
Example: An AI system used by police authorities to predict criminal behavior in crimes such as terrorism based solely on age, nationality, address, type of car, and marital status.
Mass Biometric Recognition, art. 5.1 (e)
AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or closed-circuit television (CCTV) footage.
Example: Clearview AI, a system that has generated considerable controversy for building a database of billions of photos scraped from social networks and other public websites.
Emotion Recognition, art. 5.1 (f)
AI systems that infer emotions in the workplace or in educational institutions (except for medical or safety reasons).
Example: Use of an emotion recognition AI system during the hiring process.
Biometric Categorization for Discriminatory Purposes, art. 5.1 (g)
AI systems that categorize people based on their biometric data to deduce or infer their race, political opinions, trade union membership, or religious beliefs (except for the labelling or filtering of lawfully acquired biometric datasets).
Example: An AI system that classifies active social media users according to their supposed political orientation by analyzing biometric data from photos they have uploaded, in order to send them targeted political messages.
Remote “Real-Time” Biometric Identification in Public Spaces, art. 5.1 (h)
AI systems for remote real-time biometric identification in publicly accessible spaces for law enforcement purposes, except for:
- the targeted search for victims of abduction, trafficking, or sexual exploitation, and the search for missing persons;
- the prevention of specific, substantial, and imminent threats to people’s safety, or of a genuine and foreseeable threat of a terrorist attack;
- the localization or identification of suspects of certain serious crimes.
Example: The use of a real-time biometric identification system by police to identify a shoplifter and compare their facial images with a criminal database is prohibited, as it does not fall within any of the objectives listed in Article 5.1 (h), points i) to iii) of the Regulation.
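To make the determination under Article 5 more tangible, the following is a minimal, non-authoritative sketch of how an internal compliance team might encode the categories above as a first-pass triage checklist; ARTICLE_5_CHECKLIST and flag_prohibited_practices are hypothetical names of ours, the questions merely paraphrase the prohibitions, and a flag only means the system should be escalated for proper legal review (including whether one of the exceptions applies).

```python
# Illustrative first-pass triage against the Article 5.1 prohibitions.
# A "yes" answer flags the system for legal review; it is not legal advice.
ARTICLE_5_CHECKLIST = {
    "5.1(a)": "Uses subliminal, manipulative, or deceptive techniques to "
              "materially distort a person's behavior?",
    "5.1(b)": "Exploits vulnerabilities due to age, disability, or a specific "
              "social or economic situation?",
    "5.1(c)": "Scores people on social behavior or personal traits, leading to "
              "detrimental or unfavorable treatment?",
    "5.1(d)": "Predicts the risk of committing a crime based solely on "
              "profiling or personality traits?",
    "5.1(e)": "Builds or expands facial recognition databases by untargeted "
              "scraping of the internet or CCTV footage?",
    "5.1(f)": "Infers emotions in the workplace or in education, outside "
              "medical or safety uses?",
    "5.1(g)": "Categorizes people from biometric data to infer race, political "
              "opinions, union membership, or religious beliefs?",
    "5.1(h)": "Performs real-time remote biometric identification in publicly "
              "accessible spaces for law enforcement purposes?",
}

def flag_prohibited_practices(answers: dict[str, bool]) -> list[str]:
    """Return the Article 5.1 letters under which a system was flagged."""
    return [ref for ref, hit in answers.items() if hit and ref in ARTICLE_5_CHECKLIST]

# Example: a hypothetical HR tool that infers candidates' emotions during hiring.
answers = {ref: False for ref in ARTICLE_5_CHECKLIST}
answers["5.1(f)"] = True
print(flag_prohibited_practices(answers))  # ['5.1(f)'] -> escalate to legal review
```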
Although the sanctioning regime established in the AI Regulation will not apply until August 2, 2025, anyone who detects one of these prohibited AI systems, or is harmed by one, can already report it to the Spanish Artificial Intelligence Supervisory Agency (AESIA), the body responsible for overseeing compliance with the Regulation in Spain.
AESIA has stated that it will examine such complaints, although it will not be able to impose the corresponding sanctions until August 2, 2025.
At Metricson, we specialize in legal advice on, and adaptation to, the AI Regulation: we conduct compliance audits, develop effective strategies, provide specialized training, and supervise AI projects to ensure their alignment with current regulations.
Article written by:
Lawyer specializing in intellectual and industrial property and new technologies
estefania.asensio@metricson.com
About Metricson
With offices in Barcelona, Madrid, and Valencia and a significant international presence, Metricson is a pioneering firm in legal services for innovative and technology companies and a specialist in intellectual property. Since its inception in 2009, it has advised more than 1,400 clients from 15 different countries, including startups, investors, large corporations, universities, institutions, and governments.
If you want to contact us, do not hesitate to write to contacto@metricson.com. We look forward to talking with you!