AI EU Act – How will the new European law affect AI professionals?

PREDICTLAND AI

The European AI law comes into force in 6 months. It carries liabilities and obligations for data scientists and developers of applications based on AI and machine learning, depending on the degree of risk those systems pose to users and citizens. And the fines can reach 7% of your turnover...

On March 13, lawmakers in the European Parliament overwhelmingly approved the AI Act, a regulation that governs AI according to a risk-based approach (official executive summary here).

It is an important move to regulate and manage the use of certain AI models in the EU, or affecting EU citizens, and it contains some strict rules and serious consequences for non-compliance. If you are a Data Scientist or AI Engineer, you will want to know the details that may affect your company and your developments.

Focus on risk for citizens and users

This law is essentially built around the concept of risk: risk to the health, safety and fundamental rights of EU citizens. Not just the risk of some kind of theoretical AI apocalypse, but the everyday risk that people’s real lives are made worse in some way by the model you are building or the product you are selling. If you are familiar with current debates about AI ethics, aka responsible AI, this will sound familiar.

Implicit discrimination and infringement of people’s rights, as well as harm to people’s health and safety, are real challenges facing the current batch of AI products and companies. This law is the first EU effort to protect individuals.

Note that, EU-specific as it may seem, this law has a remarkable nuance in terms of scope:

Any company that buys, develops, customizes or uses AI systems in its services, where those systems could affect an EU citizen, will have to answer to the EU AI Act. This obviously affects manufacturers such as OpenAI, Google or Microsoft, whose generative AI platforms reach users in every corner of the planet. But it will also, in all likelihood, condition any organization based outside the EU that has relations or transactions with European citizens, for example when running AI-based recruitment processes or assessing the approval of a microcredit through scoring algorithms. This seems to me to be a matter of considerable weight.

Defining AI and its scope

Definitions of AI vary according to one’s taste. In the case of the EU, the law defines “AI” as follows:

A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

What does this really mean? My interpretation is that machine learning models whose outputs are used to influence the outside world (especially people’s physical or digital conditions) fit this definition, even if they are not real-time, online systems with automatic re-training. For example (a small code sketch follows this list), models that:

  • decide on the risk levels of individuals, such as credit risk or risk of non-compliance with the law
  • determine what content is shown to people online, in a feed or in advertisements
  • differentiate the prices shown to different people for the same products
  • recommend treatment, care or services to individuals
  • recommend certain actions (or not) to users
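
To make that reading a little more concrete, here is a minimal, purely illustrative triage sketch in Python. The `SystemProfile` fields and the `likely_in_scope` helper are my own shorthand for the elements of the definition quoted above, not terms or criteria from the regulation, so treat it as a thinking aid rather than a compliance test.

```python
# Toy triage helper: does a system plausibly match the Act's definition of AI?
# Field names and logic are this sketch's own shorthand, not legal criteria.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    machine_based: bool           # runs as an automated, software-based system
    some_autonomy: bool           # operates with some degree of autonomy
    infers_outputs: bool          # infers predictions, content, recommendations or decisions
    influences_environment: bool  # outputs affect people's physical or virtual conditions

def likely_in_scope(p: SystemProfile) -> bool:
    """Rough reading of the definition: all elements present => likely in scope."""
    return all([p.machine_based, p.some_autonomy, p.infers_outputs, p.influences_environment])

# Example: a batch credit-scoring model whose scores drive loan decisions
credit_scoring = SystemProfile(True, True, True, True)
print(likely_in_scope(credit_scoring))  # True -> treat it as AI under the Act and check its risk tier
```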

Types of AI under the EU Act

The law differentiates between several types of applications. Some will be totally prohibited, and others will be subject to much more scrutiny and transparency requirements.

AI systems of unacceptable risk

These types of AI systems are simply not allowed. This part of the law will go into effect first, six months from now. It includes:

  • Behavioral manipulation or deceptive techniques to get people to do things they would not otherwise do
  • Targeting people because of things like age or disability to change their behavior and/or exploit them
  • Biometric categorization systems that classify people according to highly sensitive traits (political, religious or philosophical beliefs, sexual orientation, race)
  • Assessments of personality characteristics involving social scoring or differential treatment
  • Facial recognition or “real-time biometric identification”, except for a select set of use cases (searching for missing or abducted persons, an imminent threat to life or safety/terrorism, or the prosecution of a specific crime)
  • Predictive policing (predicting that people will commit crimes in the future)
  • Extensive facial/biometric scanning or data scraping
  • Emotion inference systems in education or work without a medical or safety purpose (e.g., an application intended to determine whether someone is “happy enough” to get a sales job)

High-risk AI systems

This list, on the other hand, covers systems that are not prohibited, but will be highly scrutinized. There are specific rules and regulations that will cover all of these systems, which are described below.

  • AI in medical devices
  • AI in vehicles
  • AI in emotion recognition systems
  • AI in policing, surveillance and control systems

This category excludes the specific use cases prohibited above. Thus, emotion recognition systems could be allowed, but not in the workplace or in education. AI in medical devices and in vehicles is flagged, with good reason, as posing serious risks or potential risks to health and safety, and needs to be monitored.

Other systems

The other two remaining categories are “Low Risk AI Systems” and “General Purpose AI Models”.

General-purpose models are things like GPT-4, Claude or Gemini: systems that have very broad use cases and can be embedded in other programs. So GPT-4 per se is not in a high-risk or prohibited category, but you won’t be able to use it in predictive-policing systems.

Transparency and Scrutiny

If you develop a high-risk AI application, you will be responsible for the following:

  • Maintain and ensure the quality of the data you use in your model
  • Provide documentation and traceability: where did you get your data from, and can you prove it? Can you show your work in terms of any changes or edits that have been made? (A minimal record-keeping sketch follows this list.)
  • Provide transparency: if the public is using your model (a chatbot, for example) or a model is part of your product, users will need to be warned that they are not dealing with a human. This will apply to low-risk systems as well.
  • Maintain human oversight: simply saying “the model says…” will not be enough. Humans will be responsible for what the model’s results say and, more importantly, for how the results are used.
  • Ensure cybersecurity and robustness against cyberattacks, breaches and unintentional privacy violations. If your model fails due to code errors or is hacked because of vulnerabilities you did not fix, it will be your responsibility.
  • Carry out impact assessments: if you are building a high-risk model, you need to conduct a rigorous assessment of what its impact might be (even if that impact is the intention of the product) on the health, safety and rights of users or the public.
  • For public entities, register in an EU public database. Submission requirements apply to “authorities, government agencies, or other public bodies”.
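
What this record-keeping looks like in practice will vary by organization. As a minimal sketch, with names and fields I have invented purely for illustration (nothing here is a format prescribed by the Act), you could keep a machine-readable record per model covering data provenance, changes, oversight and user disclosure:

```python
# Illustrative record for documentation and traceability obligations.
# The structure and field names are assumptions for this sketch, not a mandated format.
import json
from datetime import datetime, timezone

model_record = {
    "model_name": "credit_risk_v3",  # hypothetical model name
    "data_sources": [
        {"name": "loan_applications_2023", "origin": "internal CRM export", "licence": "internal"},
    ],
    "change_log": [
        {"date": "2024-02-10", "change": "re-trained after removing postcode feature", "author": "ds-team"},
    ],
    "human_oversight": "credit officers review every automated rejection",
    "user_disclosure": "applicants are informed that a model contributes to the decision",
    "last_updated": datetime.now(timezone.utc).isoformat(),
}

# Persist alongside the model artifact so the history can be shown on request
with open("model_record.json", "w") as f:
    json.dump(model_record, f, indent=2)
```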

Tests

In the case of a high-risk AI solution, you need to test it to make sure you are following the guidelines. Refer to Article 54b, which authorizes testing on individuals once you have obtained their consent.

Entry into force of the EU AI Act

The law has a staggered implementation:

  • Within 6 months, the prohibitions on unacceptable risk AI will come into force.
  • Within 12 months, the laws for general purpose systems will apply.
  • Within 24 months, all other rules will come into force.
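
If it helps to turn those offsets into calendar dates, here is a small, purely illustrative Python snippet. The entry-into-force date below is a placeholder of mine; substitute the date on which the Act actually enters into force.

```python
# Illustrative only: convert the staggered deadlines into dates.
from datetime import date

def add_months(d: date, months: int) -> date:
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, min(d.day, 28))  # clamp the day to keep the date valid

ENTRY_INTO_FORCE = date(2024, 8, 1)  # placeholder assumption, not taken from the Act
milestones = {
    "prohibitions on unacceptable-risk AI": 6,
    "rules for general-purpose AI": 12,
    "all remaining rules": 24,
}
for name, months in milestones.items():
    print(f"{name}: {add_months(ENTRY_INTO_FORCE, months)}")
```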

Note: The law does not cover personal activities, as long as they do not fall into the prohibited categories. You can continue to develop your favorite open source app at home without risk!

Penalties

What happens if your company does not comply with the law, and an EU citizen is affected? These are the penalties:

  • If you engage in one of the prohibited forms of AI described above: fines of up to €35 million or, if you are a company, 7% of your global revenue for the last year (whichever is greater).
  • Other non-compliance: fines of up to €15 million or, if you are a company, 3% of your global revenue for the last year (whichever is greater).
  • Lying to the authorities about any of these considerations: fines of up to €7.5 million or, if you are a company, 1% of your global revenue for the last year (whichever is greater).

Note: for small and medium-sized companies, including startups, the fine will be the smaller of the numbers, not the larger.
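
As a quick sanity check on how those caps combine, here is a small, purely illustrative Python function. The band labels are my own; whether a given breach falls into a band, and how turnover is measured, is of course a legal question rather than a programming one.

```python
# Mirrors the max/min logic described above; band names are this sketch's own labels.
def fine_cap(annual_turnover_eur: float, band: str, is_sme: bool = False) -> float:
    bands = {
        "prohibited": (35_000_000, 0.07),   # prohibited AI practices
        "other":      (15_000_000, 0.03),   # other non-compliance
        "misleading": (7_500_000, 0.01),    # supplying incorrect information to authorities
    }
    fixed, pct = bands[band]
    pct_amount = pct * annual_turnover_eur
    # SMEs and startups get the lower of the two figures; everyone else the higher
    return min(fixed, pct_amount) if is_sme else max(fixed, pct_amount)

print(fine_cap(2_000_000_000, "prohibited"))            # 140,000,000.0 -> 7% outweighs the €35M floor
print(fine_cap(10_000_000, "prohibited", is_sme=True))  # 700,000.0 -> the smaller figure applies
```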

What Should Data Scientists and Application Developers Do?

If you are building models and products using AI as defined in the Act, the first and most important thing is to familiarize yourself with it and its requirements.

Here are the official executive summary of the law and the AI Act Explorer for details.

Next, be on the lookout for possible non-compliance in your own business or organization. There is still time to find and fix problems, but prohibited forms of AI take effect first, and we’re talking 6 months.

In large companies you will probably have a legal team, but don’t assume they will be able to take care of everything, not least because they may lack the necessary knowledge and criteria. You will have to be part of the answer and get involved. You can use the Compliance Checker tool on the EU AI Act website to help you.
