The European Union has once again sought to lead the way in regulating Artificial Intelligence (AI). It previously took the regulatory initiative in the area of data protection with Regulation (EU) 2016/679 of April 27, 2016, on the protection of natural persons with regard to the processing of personal data and the free movement of such data (GDPR), applicable since 2018, which has become an international standard for privacy regulation.
Key initiatives in the digital field
The European Union has also been a pioneer in establishing policies applicable to digital platforms with the Digital Markets Act (DMA)
and the Digital Services Act (DSA), which impose rules aimed at rebalancing power between digital platforms and users and ensuring a fairer and safer digital environment.
Likewise, Regulation (EU) 2022/2554 on digital operational resilience for the financial sector (DORA Regulation) was recently approved, aiming to establish uniform rules so that financial entities can respond to disruptions and threats involving information and communication technologies (ICT).
The dilemma of regulation and innovation
There seems to be a certain eagerness on the part of the European Union to regulate these areas, justified by the defense of the Union's values: democracy, security, the protection of fundamental rights, and the Rule of Law.
However, this interventionist approach tends to hinder innovation and the development of emerging technologies in Europe and, if not addressed in time, will have irreversible effects on the region's technological competitiveness.
Comparative disadvantages in AI development
It is evident that, in terms of AI development and innovation, Europe is at a disadvantage compared to countries like China and the United States. Specifically, European Union organizations lag between 45% and 70% behind comparable US companies in AI adoption, a gap attributed to various factors such as regulatory barriers and lower private investment. [1]
Although the European Union has, compared to the United States, a greater number of professionals specialized in AI, it appears to have significant difficulty retaining them within our territory, which translates into a loss of talent and, consequently, a slowdown in technological development.
New regulations and their effects
On August 1, 2024, Regulation (EU) 2024/1689 of the European Parliament and of the Council, of June 13, 2024, laying down harmonized rules on artificial intelligence (AI Regulation), entered into force, and it will be applicable from August 2, 2026.
However, the obligations established in the AI Regulation will become enforceable gradually and in phases, with some chapters and articles taking effect earlier.
Starting February 2, 2025, Chapters I and II, which include the prohibition of AI practices posing unacceptable risk, will begin to apply; from August 2, 2025, the provisions of Chapter III (Section 4), Chapters V, VII, and XII, and Article 78 will apply; and finally, from August 2, 2027, Article 6.1 and the corresponding obligations will apply.
Consequently, not all obligations established in the AI Regulation will be applicable until three years after its entry into force.
Nevertheless, it is crucial that companies and organizations begin adapting to and implementing the obligations established in this regulation immediately, to ensure legal compliance and avoid a rushed, last-minute adaptation when enforcement becomes imminent, as happened with the GDPR.
Regarding the objectives of the AI Regulation, the European Commission states that its purpose is to improve the effective application of existing legislation on fundamental rights and security, as well as to promote investment and innovation in AI within the European Union and the development of a single market for AI applications.
The Impact of Regulation on Competitiveness
It may be somewhat premature to assert that the approval of the AI Regulation is producing the opposite of the effect sought by the European Commission, namely to encourage investment and develop reliable, innovative AI systems that help increase productivity. What is evident, however, is that the avalanche of regulations companies must comply with when operating in the European space negatively impacts the technological development of our region.
In fact, a group of European companies from relevant sectors signed an open letter to the European Parliament and the European Commission expressing their dissatisfaction with the AI Regulation, considering that the introduction of such strict rules “jeopardized Europe’s technological competitiveness and sovereignty.” [2]
Some tech giants like Apple Inc. (Apple) and Meta Platforms, Inc. (Meta) have recently decided not to make certain services or features available to their users in the European Union. In Apple’s case, it decided not to launch the main Apple Intelligence features this year “due to regulatory uncertainties caused by the DMA.” [3]
The same has happened with Meta, which has decided not to offer its new multimodal generative AI model called “Llama” in the European Union, due to “the unpredictable nature of the European regulatory environment.” [4]
In fact, last year, OpenAI’s CEO, Sam Altman, warned regarding the AI regulatory framework: “We will try to comply, but if we cannot, we will stop operating.” [5]
The above highlights that various operators in the sector, regardless of their size, view the AI Regulation, together with the other regulations they must comply with when operating in Europe, as an obstacle. Hence the exodus of companies and professionals to countries with more flexible or less restrictive regulations.
It is important to note that this is not the only factor causing a competitive technological imbalance between European Union countries and others such as the United States, China, or the United Kingdom. Other factors, such as public and private investment, financing, and raw materials, also play a role.
In this regard, there is an undeniable gap between private investment in the United States and in the European Union, which, unless adequate measures are taken, will undoubtedly translate into a competitive disadvantage and a slowdown in Europe's economic growth.
The same happens with venture capital financing, which is practically nonexistent in our territory. As for hardware inputs such as the chips and semiconductors needed for AI, the European Union depends largely on imports from countries like China.
A Holistic Approach for the Future
Consequently, to avoid the imbalance previously highlighted, the European Union must adopt a holistic and integrative approach that helps find efficient solutions, ensuring the protection of citizens without stifling the technological advancement necessary to maintain competitiveness in the global market.
It is undeniable that the approval of the AI Regulation has been a major step in the EU regulatory framework that, undoubtedly, like the GDPR, will serve as a model for regulating AI in other countries.
Although far from leading the technological race, the European Union has made a huge effort to position itself as a pioneer in regulating AI. This, however, carries the risk of adopting rules that, while aiming to protect citizens, simultaneously restrict innovation and competitiveness in such a rapidly evolving sector.
If we compare the maturity level of the GDPR and the AI Regulation, we must consider that the GDPR was preceded by earlier data protection laws that evolved alongside technological developments. This is a differentiating factor, since the AI Regulation is the first legislation approved in this area.
Without prejudice to the above, the overall assessment of the AI Regulation is quite positive, and its approach is largely sound in most of its provisions. However, it has some shortcomings, or aspects likely to cause practical problems, that should be highlighted.
Legislative Technique Deficiencies
The AI Regulation has drawn numerous criticisms for the multitude of vague legal concepts that make it difficult to understand.
Therefore, to interpret some of the terms it introduces, we will need to rely on general principles of law grounded in the legal system, or await subsequent guidance from the AI supervisory authorities of the Member States or from expert practitioners.
Length and Complexity
The regulation is long and complex in both structure and content. Most companies and organizations find the obligations confusing and complain that implementing the risk-mitigation measures established by the AI Regulation constitutes a huge financial burden.
This mainly affects startups or SMEs, which face many difficulties in complying.
Moreover, the AI Regulation is not and will not be the only instrument regulating AI in the European Union, which adds complexity. In this regard, Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations, and administrative provisions of the Member States concerning liability for defective products (Product Liability Directive), recently amended, will apply.
Additionally, the European Commission has recently proposed the Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), which is still in the legislative process.
Possible Overlap Between Regulations
Another criticism of the AI Regulation concerns the actual need for a new regulatory framework for AI. Many genuinely harmful AI applications could, in most cases, be addressed under the national legal frameworks of Member States, for example through data protection and privacy law, intellectual and industrial property law, labor law, or the rights to privacy, honor, and one's own image, among others.
Definition of “AI Systems”
Although the concept of AI is the core and starting point of this regulation, it was the subject of intense debate during the preparation of the AI Regulation.
Article 3.1 of the Regulation defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”
However, it is necessary to consider Recital 12 of the regulation, as it expands and nuances this definition. The recital states that the concept must be aligned with the work of international organizations, which gives the notion of “AI systems” some flexibility and anticipates possible changes over time.
As can be seen, even the very definition of AI was not settled without controversy among those involved in drafting the regulation.
Scope of Application
The European Union has opted for an extraterritorial scope: the Regulation applies regardless of whether the provider and/or deployer is established in the European Union or in a third country, and regardless of their location, whenever the output produced by an AI system is used within European territory.
This has been done to protect European citizens from infringements of their rights originating in third countries that are “more flexible” or less protective of those rights. This approach does not seem entirely misguided; however, it will clearly create transnational problems and challenges that will have to be addressed through international law.
Risk-Based Scope and Approach
The material scope of the AI Regulation is very broad. Indeed, many experts have expressed concern about the multitude of issues it covers.
Regarding the approach of the AI Regulation, the European Commission’s initial work focused on setting limits in the business sphere related to the impact of AI on fundamental rights. Later, when drafting the AI Regulation, the legislator mainly focused on risk/compliance assessment, revealing a techno-solutionist tendency in approaching the regulatory framework’s development.
This is a market-oriented regulation, mainly focused on mitigating the risks that may be associated with the use, development, and commercialization of AI. In general terms, the risk-based approach seems appropriate and is not new, as the GDPR uses a similar approach (Art. 35 GDPR).
The AI Regulation imposes different requirements depending on the level of risk or categorization of the AI system.
Categorization of AI Systems
As mentioned, the regulation divides risks into four categories: minimal, limited, high, and unacceptable (or prohibited) risks.
Without a doubt, the most interesting are the high-risk AI systems (Articles 6 and following of the AI Regulation). Succinctly, for a system to be considered high-risk, it must meet the conditions set out in Art. 6.1 or be included in those listed in Annex III of the Regulation.
However, not all AI systems listed in that annex are automatically considered high-risk, so the exceptions established in the regulation must also be observed.
The regulation also provides for the possibility of expanding the list in Annex III in the future.
Determining how an AI system is categorized will be fundamental for companies to understand the obligations they must meet. However, since this is not always easy to determine, they will require expert professionals in both the technical and legal fields.
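By way of illustration only, and not as legal advice, the decision logic just described could be sketched as follows. This is a minimal sketch in Python; the profile fields and checks are hypothetical simplifications of Article 6.1 and Annex III and do not replace an expert assessment.

```python
# Illustrative sketch only: a highly simplified model of the high-risk test
# described in Article 6 and Annex III of the AI Regulation. The field names
# and checks are hypothetical and do not replace a legal assessment.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    safety_component_of_regulated_product: bool  # simplified Art. 6.1-style condition
    listed_in_annex_iii: bool                    # appears in an Annex III use case
    exception_applies: bool                      # e.g., a narrow procedural task only

def is_high_risk(profile: AISystemProfile) -> bool:
    """Return True if, under this simplified model, the system would be treated as high-risk."""
    if profile.safety_component_of_regulated_product:
        return True
    return profile.listed_in_annex_iii and not profile.exception_applies

# Example: a system listed in Annex III with no applicable exception
print(is_high_risk(AISystemProfile(False, True, False)))  # True
```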
Wide range of applicable sanctions
The regulation provides for quite substantial fines, and the amount will depend on the type of non-compliance. In general terms, the largest fines correspond to breaches of prohibited or unacceptable practices, with penalties that can reach up to 35 million euros or up to 7% of the company’s total annual global turnover, if that amount is higher than 35 million.
Regarding sanctions for non-compliance with the obligations of high-risk systems, fines may be up to 15 million euros or 3% of the company’s total annual global turnover if that amount exceeds 15 million.
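To make the arithmetic concrete, the following minimal sketch, using hypothetical turnover figures, shows how the “fixed amount or percentage of turnover, whichever is higher” cap works.

```python
# Illustrative arithmetic only: the Regulation caps fines at a fixed amount or a
# percentage of total worldwide annual turnover, whichever is higher.
# The turnover figures below are hypothetical examples.
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Return the maximum applicable fine: the higher of the fixed amount or the percentage of turnover."""
    return max(fixed_cap_eur, pct_cap * turnover_eur)

# Prohibited practices: up to EUR 35 million or 7% of turnover
print(fine_cap(2_000_000_000, 35_000_000, 0.07))  # 140,000,000.0 for EUR 2 bn turnover
# High-risk obligations: up to EUR 15 million or 3% of turnover
print(fine_cap(100_000_000, 15_000_000, 0.03))    # 15,000,000 (the fixed cap is higher)
```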
Each Member State will be responsible for developing its own sanctioning regime in national legislation, and the wide range of penalties available grants them a high degree of discretion in this regard.
In short, as we have already mentioned, our assessment of the AI Regulation is positive, and we consider that, in any case, given the magnitude of the potential interference with citizens' rights and freedoms, it was necessary.
The protectionist stance towards citizens adopted by the European Union reflects a commitment to the protection of fundamental rights, privacy, and the security of individuals. However, as we have indicated, if favorable and efficient measures are not implemented, it will lead to a slowdown in the European Union's technological progress and, consequently, in its competitiveness.
Continuous dialogue between authorities, expert professionals, companies, and citizens will be fundamental, as well as the implementation of measures to address the current imbalance between Europe and other countries such as the United States in this field.
[1] McKinsey Global Institute, “Time to place our bets: Europe’s AI opportunity”
[2] WIRED, “Business guild on the AI Law: ‘it endangers Europe’s technological sovereignty’”
[3] El Español, “Apple will not launch the main Apple Intelligence features this year in Europe due to the DMA”
[4] El País, “Meta will not offer its new generative AI models in Europe due to its ‘unpredictable regulatory environment’”
[5] DW, “EU: tidal wave of laws scares AI companies away”
Article written by:
Lawyer specializing in intellectual and industrial property and new technologies
estefania.asensio@metricson.com
About Metricson
With offices in Barcelona, Madrid, Valencia, and Seville, and a significant international presence, Metricson is a pioneering firm in legal services for innovative and technological companies, specializing in the protection of intellectual and industrial property rights, strategy design, and judicial and extrajudicial defense. Since its founding in 2009, it has advised more than 1,400 clients from 15 different countries, including startups, investors, large corporations, universities, institutions, and governments.
If you want to contact us, do not hesitate to write to us at contacto@metricson.com. We look forward to talking with you!