Legal aspects of artificial intelligence
Introduction, overview and scope of application
Artificial intelligence has become part of our professional and private lives.
Although very useful, AI presents a number of risks, which have led the European Union to regulate it.
→ A recent example, reported in Les Echos in December 2024: scammers reproduced a real videoconference meeting, imitating the voices and appearance of the participants, in order to persuade the group’s CFO to make transfers totalling $25 million. The meeting was entirely fabricated by AI.
This regulation, Regulation (EU) 2024/1689 of June 13, 2024, known as the AI ACT, is the first binding legal framework for AI anywhere in the world.
The AI ACT is primarily aimed at AI suppliers and developers, i.e. companies that create and market products and services using AI.
Article 3(1) of the AI ACT defines an AI system as a machine-based system that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
The Regulation entered into force on August 1, 2024; its provisions apply progressively, and by August 2026 it will cover all systems according to their level of risk.
The territorial scope of the Regulation is, of course, the territory of the European Union: it applies to suppliers established in the Union or in a third country, as long as their AIS (artificial intelligence systems) are used in EU territory.
With regard to the material scope of application, the Regulation is based on the use made of the systems, not on the underlying technologies.
Excluded from the scope of the Regulation are AIS developed exclusively for military purposes, or placed on the market under international agreements that respect fundamental rights and individual freedoms.
Nor does it affect Member States’ competences in the field of national security (Article 2(3)).
Nor, subject to certain specific systems, does it apply to AIS developed solely for scientific research and development, or to their results.
Risk levels and deadlines.
The European Regulation defines four levels of risk for AIS: unacceptable, high, limited or minimal, corresponding to a scale of obligations ranging from the prohibition of certain practices to simple transparency requirements.
1°) Systems presenting unacceptable risks, in particular a threat to security or a potential infringement of fundamental rights (Article 5 of the AI ACT).
The prohibition of these practices takes effect six months after the Regulation’s entry into force: such systems must be withdrawn or brought into compliance by February 2, 2025.
2°) High-risk systems (Article 6), such as those intended for use as a safety component of a product, or products themselves subject to a conformity assessment before being placed on the market, where they present a significant risk of harm to the health, safety or fundamental rights of individuals.
Compliance obligations are set out in Articles 8 to 15 of the AI ACT, ranging from the implementation of a risk management system and appropriate data governance to transparency, robustness and cybersecurity requirements.
Technical documentation must be drawn up, and the CE marking affixed.
The deadline for compliance is 24 months from the Regulation’s entry into force. Products already covered by sectoral European legislation, such as medical devices, in-vitro diagnostics, toys, radio equipment, civil aviation safety equipment or agricultural vehicles, have 36 months to comply with the requirements of the AI ACT.
3°) Systems presenting limited risks, such as chatbots of the ChatGPT type, must comply with transparency requirements, so that users know they are not interacting with humans (Article 50).
4°) Minimal-risk systems, such as video games and spam filters, can be used freely.
Generally speaking, the rules applicable to general-purpose AI models, and the requirement for Member States to designate their competent authorities, apply from August 2, 2025.
From August 2, 2026: Application of all provisions and implementation of at least one regulatory sandbox by EU member states.
From August 2, 2027: Entry into force of the rules applicable to high-risk AIS covered by Annex I.
Regulatory sandboxes.
They are provided for in Articles 57 to 61 of the AI ACT.
These are fixed-term support schemes that enable suppliers to work with the relevant authorities to develop, train, test and validate AIS, with a view to ensuring that the end product complies with the applicable regulations.
In France, this experimental framework enables companies to apply for exemption from all or part of the obligations linked to the use of frequencies or to network operator status, for a maximum period of two years, under the supervision of ARCEP, the French telecoms regulator.
Prohibitions and penalties.
Article 5 of the Regulation prohibits AIS from using subliminal techniques to influence users without their knowledge.
It also prohibits the use of real-time remote biometric identification in publicly accessible spaces, subject to narrow exceptions, which require prior authorization from a judicial authority or an independent administrative authority.
As for personal data, the AI ACT contains numerous references to Regulation (EU) 2016/679, the GDPR.
The CNIL has itself published resources on AI models and the GDPR on its website: https://www.cnil.fr/modeles-dia-et-rgpd
Penalties are set by the Member States themselves, in line with the criteria laid down in the European Regulation.
Failure to comply with Article 5 may result in administrative fines of up to €35 million or, if the offender is a company, up to 7% of its total worldwide annual turnover for the previous financial year, whichever is higher.
Finally, once on the EU market, AIS remain subject to oversight.
Chapter IX of the European Regulation contains numerous provisions designed to control and regulate them.
These include post-market surveillance, information sharing and market monitoring.
Thierry Clerc
12/30/2024