Regulation of Artificial Intelligence (“AI”) – Current state of AI Regulation and Recommendations for companies 

Few companies or institutions do not use AI at all, considering that spam filters, antivirus protection, and automated language translation are based on AI technologies. While AI applications streamline internal processes and save companies costs, they also entail risks that European lawmakers aim to address uniformly. This blog post provides an overview of the content of the planned AI regulation and formulates specific recommendations to help companies implement the upcoming requirements smoothly.

What will be the content of AI regulation? 

One of the most important aspects of the ongoing discussions has been establishing a common definition of the term “artificial intelligence,” since it determines which systems fall within the scope of the regulation. The current definition is quite broad:

An “AI system” is software that has been developed using one or more of the techniques and approaches listed in Annex I and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with (cf. Article 3, No. 1, AI Regulation). 

In short, under the current definition, an “artificial intelligence system” is a machine-based system that operates with varying degrees of autonomy and generates outputs such as predictions, recommendations, or decisions that can influence the physical or virtual environment. 

Similar to the General Data Protection Regulation (“GDPR”) that came into effect in 2018, the AI regulation will be based on the so-called risk-based approach, derived from the OECD principles listed below: 

  • Inclusive growth, sustainable development, and well-being,
  • Human-centered values and fairness,
  • Transparency and explainability,
  • Robustness, security, and safety,
  • Accountability.

Accordingly, individual AI applications must be classified according to potential risk in order to comply with the requirements applicable to each risk class. The AI regulation distinguishes between four different categories: 

1. Prohibited AI systems 

These are AI systems with unacceptable risks, i.e., systems that are considered a threat to people. Prohibited practices include: 

  • Cognitive behavioral manipulation of humans or certain vulnerable groups, such as voice-controlled toys that promote dangerous behavior in children. 
  • Social scoring: Classification of people based on behavior, socioeconomic status, or personal characteristics. 
  • Real-time remote biometric identification systems in publicly accessible spaces, such as facial recognition. 

2. AI systems with high risk  

These are AI systems that can adversely affect safety or fundamental rights. They fall into two categories: 

a) AI systems used in products covered by EU product safety regulations, such as  

  • medical devices,  
  • aviation-related systems, and  
  • toys.  

b) AI systems falling into eight specific areas; these systems must be registered in an EU database: 

  • Biometric identification and categorization of natural persons. 
  • Administration and operation of critical infrastructure. 
  • General and vocational education. 
  • Employment, employee management, and access to self-employment. 
  • Access to and use of essential private and public services. 
  • Law enforcement. 
  • Administration of migration, asylum, and border controls. 
  • Assistance in the interpretation and application of the law. 

3. AI systems with low risk 

Such systems are subject only to minimal transparency requirements, which allow users to make an informed decision about whether they wish to interact with the AI system. Examples include AI systems that generate or manipulate image, audio, or video content, such as deepfakes. All other AI systems fall into the fourth category of minimal risk, for which the regulation provides no additional obligations. 

Obligations for general-purpose AI systems 

One important set of obligations applies to providers of so-called foundation models. These are models that can transfer the capabilities and knowledge acquired during training to a wide range of other tasks and domains. Before such providers place their AI on the EU market, they must identify and mitigate the associated risks to health, the environment, and fundamental rights. 

Well-known generative AI systems such as ChatGPT, which are built on such foundation models, are therefore subject to additional transparency and legality requirements. For example, AI-generated content must be clearly labeled as such, and it must be disclosed which copyrighted works were used for training purposes. 

When will the AI regulation come into force? 

In June 2023, the European Parliament adopted its negotiating position on the draft AI regulation by a large majority. The regulation will now be negotiated in so-called trilogue negotiations between the Commission, the Council, and the European Parliament. Once these three institutions have agreed on a final version, AI providers will have two years to implement the requirements of the final AI regulation. 

It is already clear that AI providers should assess how they can meet the requirements of the AI regulation in order to develop a legally compliant product from the outset. 

Recommendations: 

1. Assess where AI is being used in your organization and create an overview of AI applications and their areas of use, or update an existing software inventory to include this information (see the illustrative sketch at the end of this list). 

2. Determine whether documentation already exists for these AI applications. For example, a data protection impact assessment that deals with the processing of personal data may already address the AI in question. In many cases, valuable information for the documentation required under the AI regulation can be derived from it. 

3. Classify the deployed AI applications according to the risk-based approach of the current draft. 

4. If a high-risk or even prohibited AI application is identified, discuss with the IT department, the legal department, and other relevant stakeholders to what extent the application can be modified or which additional risk-mitigating measures can be taken. 

5. Check if the previously used AI applications are covered by your insurance coverage in case of liability. Especially with regard to the presumption of liability-causing causality envisaged in the draft AI Liability Directive, which aims to facilitate the enforcement of claims for damages to victims, the liability risk for companies could increase significantly. Taking out appropriate insurance to cover the financial uncertainties associated with AI applications should be in the business’s interest.