The EU’s AI Act – what do companies need to consider?

September 19, 2024
LLP Law | Patent

AI-supported systems are increasingly permeating our everyday lives, and there are hardly any companies that are not at least considering in which areas, workflows and decision-making processes AI can sensibly be used. Alongside many advantages, such as making work easier, AI-controlled systems can also take on dystopian proportions in some areas. An AI that makes suggestions based on books you have already bought or films you have watched, or that supports your company with software development, may be considered helpful and has no decisive influence on your life or that of others. The same can hardly be said of AI-supported social scoring systems that decide largely autonomously whether to grant a loan or award a job, not to mention the considerably detrimental consequences of AI-generated deepfakes.

So far, technological developments have outpaced political regulation. However, the European Union is now taking action and creating a new regulatory basis for the use of AI systems with the so-called “AI Act”. The aim: more safety and more trust in the new technologies as well as uniform rules on the European market.

The regulation applies to manufacturers, dealers and importers of AI systems, as well as to users of AI systems, provided the use is not of a purely private nature. Violations of the regulations can result in fines of up to 35 million euros or 7% of annual global turnover.

The following article will guide you through some of the basic regulatory content of the European Union’s AI Act, which was adopted on March 13, 2024.

Who is affected by the AI Act and what do companies have to consider from now on?

The regulation applies to AI systems as defined by the OECD, i.e. “machine-based systems” with the following characteristics:

  • designed for varying degrees of autonomous operation;
  • adaptable to a certain degree after deployment;
  • inputs are used to generate results such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

Put very briefly, in practice this means software products and services that largely independently derive conclusions from inputs, that can have an impact on the outside world – for example as the basis for a user’s decision – and that may continue to evolve after deployment.

The regulation affects providers (manufacturers) of AI systems, importers and distributors, as well as deployers – companies that want to use AI systems or services in their business environment. The company does not have to be based in an EU member state; what matters is primarily that AI systems are used or developed within the territory of the European Union.

Classification of AI systems according to risk categories

The AI Act does not impose the same requirements on every AI system. The EU chooses a risk-based approach here: the higher the risk posed by the respective AI system, the stricter the safety and transparency requirements. Different regulations therefore apply depending on which category an AI system falls into. A distinction is made between the following levels – the assessment of which category applies is the responsibility of the company using the AI:

  • unacceptable risk,
  • high risk,
  • limited risk and
  • minimal risk.

In addition, the AI Act contains its own regulations for general purpose AI models.

Let’s start with the extreme cases: unacceptable risk and minimal risk. Minimal risk applies, for example, to AI-controlled video games or automated spam filters, as they are unlikely to have a detrimental effect on users. The AI Act imposes no requirements on such systems.

The situation is quite different for systems posing an unacceptable risk: their use is prohibited outright. These include, in particular, systems that strike at the very essence of fundamental and human rights – for example, biometric categorization or remote identification systems. The former categorize people based on their ethnicity, sexual identity or political views. The latter are used in law enforcement, for example, to search facial recognition databases or to make predictions about individuals’ propensity to commit crimes. Social scoring systems and AI systems designed to manipulate users fall into the same category.

Although high-risk AI systems are permitted, they are subject to strict requirements. Especially in the areas of transparency and control. Companies must introduce procedures to identify and mitigate the respective risks. At the same time, the AI Act requires clear documentation and unambiguous instructions for the use of AI systems. Users must also be able to easily understand that artificial intelligence is being used and how. Examples include the use of AI in recruitment processes or educational institutions, at border controls, in critical infrastructure, administration or the judiciary.

Chatbots, which are currently enjoying great popularity, are an example of systems with limited risk. The requirements here are lower: above all, it must be made clear to users that an AI system is being used.
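For companies triaging their own use cases, the four-tier scheme described above can be captured in a simple lookup. The sketch below is purely illustrative: the example systems and their assignments are hypothetical readings of the article’s examples, not a formal legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels of the AI Act, from strictest to lightest."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, subject to strict transparency and control requirements"
    LIMITED = "permitted, with disclosure obligations toward users"
    MINIMAL = "permitted, no specific requirements under the AI Act"

# Illustrative mapping of the article's examples to tiers (not legal advice).
EXAMPLE_SYSTEMS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "biometric categorization by ethnicity or political views": RiskTier.UNACCEPTABLE,
    "AI-based CV screening in recruitment": RiskTier.HIGH,
    "AI at border controls or in critical infrastructure": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-controlled video game opponent": RiskTier.MINIMAL,
}

def tier_of(system: str) -> RiskTier:
    """Look up the illustrative tier for a named example system."""
    return EXAMPLE_SYSTEMS[system]
```

In a real compliance project, each such assignment would of course have to be justified case by case against the wording of the Act.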

What now? Gradual implementation of the AI Act after it comes into force

The AI Act was adopted by the European Parliament on March 13, 2024, and the Council of the European Union gave its approval with minor amendments on May 21, 2024. The AI Act was published in the Official Journal of the European Union on July 12, 2024 and entered into force twenty days later, on August 1, 2024. A staggered transition period is planned, during which the various regulations will become applicable step by step. This gives affected companies the opportunity to implement the legal requirements in good time. Companies should use this time to familiarize themselves with the legal framework at an early stage in order to bring their AI applications into line with the law that will then apply.

For example, the ban on AI systems posing an unacceptable risk applies six months after entry into force. After twelve months, the rules on governance and on GPAI (so-called “general purpose AI”) models follow. The general requirements for high-risk systems become relevant after 24 months, and certain special requirements only after 36 months.
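The staggered timetable can be turned into a simple compliance calendar. The dates below follow the commonly cited application dates of the AI Act (prohibitions from February 2, 2025; governance and GPAI rules from August 2, 2025; general high-risk requirements from August 2, 2026; the remaining special requirements from August 2, 2027) – treat this as an illustrative sketch and verify the dates against the Act itself.

```python
from datetime import date

# Application dates for the main obligation blocks (as commonly cited).
MILESTONES = {
    "ban on unacceptable-risk AI systems": date(2025, 2, 2),                 # +6 months
    "governance rules and GPAI obligations": date(2025, 8, 2),               # +12 months
    "general requirements for high-risk systems": date(2026, 8, 2),          # +24 months
    "special requirements for certain high-risk systems": date(2027, 8, 2),  # +36 months
}

def obligations_in_force(as_of: date) -> list[str]:
    """Return the obligation blocks already applicable on a given date."""
    return [name for name, starts in MILESTONES.items() if as_of >= starts]
```

Such a calendar makes it easy to see, at any given point in the transition period, which blocks of obligations a company must already comply with.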

This means that providers of high-risk AI systems are not only obliged to introduce extensive organizational governance structures and corresponding documentation; it may also be necessary to make technical adjustments to the systems offered and to carry out and document a conformity assessment in accordance with the legal requirements. Companies that use or integrate third-party AI systems, services or functions as deployers should first document these in an inventory and then assess them accordingly in order to determine the obligations arising from the AI Act.
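As a first step toward the inventory described above, deployers can record each third-party AI system in a structured form and attach a provisional risk assessment. The fields below are an assumption about what such an inventory might usefully contain; they are not prescribed by the AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a company's inventory of AI systems in use (illustrative fields)."""
    name: str                      # e.g. "CV pre-screening tool"
    vendor: str                    # provider of the system
    purpose: str                   # what it is used for in the business
    risk_tier: str = "unassessed"  # provisional classification under the AI Act
    notes: list[str] = field(default_factory=list)  # open questions, documentation gaps

def unassessed(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return the systems still awaiting a risk classification."""
    return [r for r in inventory if r.risk_tier == "unassessed"]
```

Working through the `unassessed` list with legal counsel then yields the concrete obligations that apply to each system.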

Quite apart from the new regulatory requirements of the AI Act, technology-driven companies need to be aware of the legal challenges arising from the use of AI systems and develop an overarching strategy to ensure their AI compliance. Among other things, this requires assessing and safeguarding against risks relating to data protection, trade secrets and the protection of their own intellectual property, as well as risks arising from the infringement of third-party rights through the development (e.g. the use of third-party data for training AI models) and use of AI.

Do you have questions about individual cases and your AI systems? Please contact our attorneys at LLP Law|Patent for advice.

Sebastian Helmschrott | Rechtsanwalt (Lawyer), Certified Specialist for Information Technology Law, Department Head of IT-Law at BISG e.V.

Mr. Helmschrott is your competent LLP Law|Patent point of contact for contract design, especially for international companies in the field of semiconductors, as well as other modern technologies such as LED/OLED, embedded systems and software-supported processes. He is responsible for the national and international aspects of your IT procurement procedures, as well as IP law areas focusing on licensing and research cooperation.

Sebastian Helmschrott - LLP Law|Patent