EU AI Act (Artificial Intelligence Act) Summary

Software solutions that make decisions with the help of artificial intelligence (AI) are now widespread; in response, the EU is planning an AI Act. Examples of such systems include digital assistants, object recognition applications and recommender systems.

The added value of such systems arises from largely automated knowledge extraction from data whose complexity and quantity cannot be handled by humans alone. A decisive factor for acceptance and trust in AI-based applications is that the AI's decisions are fair and comprehensible. Relevant issues at this point are, for example, a careful analysis of the data regarding bias and analyses of the interpretability of the model. Interpretability in this context means understanding why certain predictions are made by a model (https://www.steadforce.com/how-tos/explainable-object-detection, https://www.steadforce.com/blog/explainable-ai-with-lime).

As AI becomes ever more widespread, legislators have now entered the discourse: in order to strengthen Europe's competitiveness in artificial intelligence research and application, the European Union (EU) has drawn up an ambitious plan to promote technical excellence in research on the one hand and to further strengthen trust in AI solutions on the other. The goal of this strategy is to make Europe the world leader in trustworthy and innovative AI. A core component of this approach is the so-called Artificial Intelligence Act (AI Act).

What are the EU's goals with the AI Act (Artificial Intelligence Act)?

In April 2021, the first draft of an AI regulation, the EU AI Act, was presented; it is intended to become the world's first law regulating AI. The draft aims to create binding rules that make Europe the world leader in trustworthy, secure and innovative AI.

Specifically, this means that, similarly to the General Data Protection Regulation (GDPR), an EU-wide, harmonized set of regulations is to be created. Just as with the GDPR, providers who want to offer their AI solutions within Europe are bound by this set of regulations, regardless of where the company is headquartered. The scope of the AI Act accordingly extends beyond the European single market.

The Artificial Intelligence Act (AI Act) will thus influence the global development of AI technology.

Which AI applications will be regulated?

The April 2021 draft provides a broad and in part unclear definition of AI that covers not only machine learning methods but also, for example, purely statistical methods. Accordingly, a considerable proportion of software currently in production would fall within the scope of the AI Act. It is to be hoped that a clearer definition will provide more certainty for the companies concerned.

Furthermore, AI systems covered by the above definition are divided into different risk levels, so that the requirements depend on the risk potential of the AI application. This is intended to keep regulation moderate and create innovation-friendly conditions.

As a result, a large proportion of AI applications are not to be covered at all; the draft specifically mentions video games, search algorithms and spam filters, for example. Furthermore, only a transparency obligation is to apply to low-risk systems, e.g. chatbots or so-called deep fakes. This means that users need only be aware that they are interacting with an AI.

So-called high-risk AI systems will be regulated to a much greater extent. These include, for example, applications for operating critical infrastructure, for regulating access to education and training or essential private and public services, and those that work with biometric data. In addition, systems supporting law enforcement, border control or the judiciary are also included. In general, applications that may interfere with health and safety or the fundamental rights of people are mentioned here.

Here the draft AI Act relies on comprehensive obligations for providers and users. These include, among other things, conformity assessment, human oversight, reporting of serious incidents in operation, and the creation of transparency and interpretability of results for users. In addition, if high-risk AI applications are trained with data, there will also be stringent requirements for the quality and governance of that data. This concerns the establishment of appropriate data governance procedures for data collection, for data preparation operations such as labeling, but also a prior assessment of the availability, quantity and suitability of the required datasets and an investigation regarding possible bias. In particular, the datasets must be relevant, representative, error-free, and complete. A striking aspect here is the vague wording of the draft, which is likely to cause great uncertainty in practice.
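The data-governance duties above (assessing suitability, checking completeness, investigating possible bias) can be sketched as a minimal pre-training audit. This is our own illustrative example, not a procedure from the draft; the function name, threshold and checks are assumptions:

```python
from collections import Counter

def audit_dataset(labels, protected_attrs, max_imbalance=0.8):
    """Illustrative pre-training audit sketch: flag missing labels
    (incompleteness) and heavy over-representation of any group in a
    protected attribute (a crude proxy for dataset bias)."""
    issues = []
    # Completeness check: the draft demands complete, error-free datasets.
    if any(label is None for label in labels):
        issues.append("incomplete: dataset contains missing labels")
    # Representativeness check: flag any group exceeding the imbalance threshold.
    counts = Counter(protected_attrs)
    total = sum(counts.values())
    for group, n in counts.items():
        if n / total > max_imbalance:
            issues.append(
                f"possible bias: group '{group}' makes up {n / total:.0%} of data"
            )
    return issues
```

A real governance process would of course go much further (provenance, labeling procedures, documentation), but even a simple check like this makes the draft's requirements concrete.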

Lastly, applications with unacceptable risk, e.g. those that manipulate the behavior of their users, use biometric monitoring or perform "social scoring", are to be banned entirely. The only exception here is for military applications.
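The four risk levels described above can be summarized as a simple lookup. The tier names and the mapping of example use cases are our own shorthand for the draft's categories, not terms defined in the regulation itself:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative shorthand for the draft AI Act's four risk levels."""
    UNACCEPTABLE = "banned (e.g. social scoring, behavioral manipulation)"
    HIGH = "strict obligations (e.g. critical infrastructure, law enforcement)"
    LIMITED = "transparency obligation only (e.g. chatbots, deep fakes)"
    MINIMAL = "largely out of scope (e.g. spam filters, video games)"

# Hypothetical mapping of example use cases named in the draft to tiers.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "border_control": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known example use case."""
    return EXAMPLE_TIERS[use_case]
```

In practice the classification will depend on the final legal text, so any such mapping would need to track the finished regulation.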

The draft AI Act is currently being revised and will be finalized in the near future. As expected, the definition of AI, as well as the detailed classification of applications into the different risk groups, is currently still under discussion. It is to be hoped that a clearer, innovation-friendly solution can be found at this point.

How can companies prepare for the AI Act?

In view of the substantial fines of up to 30 million euros or six percent of global annual turnover for infringements, it is advisable to act quickly. Regardless of the final definition of AI and the corresponding risk assessment, compliance with the new AI Act can only be ensured with a complete overview of the AI tools used across the entire system landscape.
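As with the GDPR, the maximum penalty is the higher of the two figures, which can be made explicit with a one-line calculation (a sketch of the ceiling as stated in the draft, not legal advice):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Higher of EUR 30 million or 6% of global annual turnover,
    per the maximum penalty in the draft AI Act."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)
```

For a company with one billion euros of turnover the ceiling would thus be 60 million euros, while smaller companies face the 30-million-euro floor.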

In particular, the introduction of enterprise-wide data governance, a framework for the efficient utilization of data, is a comprehensive and complex process. Defining uniform, standardized processes for handling data and the corresponding responsibilities and security mechanisms can affect a large part of the existing IT system landscape. Here, a competent and reliable partner can be a great help.

What's next?

The draft AI Act still has to be revised and finally passed. The two-year implementation period is expected to begin in the course of 2023. We will continue to keep you up to date through this blog. Given the complexity of the issue, it is important not to lose time and to prepare for implementation early.
