Controller or processor or…? – Self-discovery in data protection

Although the distribution of roles specified by the GDPR appears straightforward, classifying the actors involved in data processing can raise complicated demarcation issues in individual cases. Both controllers and processors may be involved in the processing of personal data. Since the catalog of duties of these actors is structured differently and their scope of liability differs, it is essential for companies and organizations to determine which data protection role they assume.

Liability risks in the event of missing or insufficient role allocation

A missing or incorrect classification as controller or processor becomes apparent at the latest when a notifiable data protection breach is identified. In that case, the competent supervisory authority must be notified of the incident, and it must be stated who the controller is, since it is the controller who is liable for a failure to notify under Art. 33 GDPR, which can in turn give rise to a claim for damages under Art. 82 (2) sentence 1 GDPR.

In contrast, the liability of the processor is limited to violations of the legal duties assigned to them in that function. However, if the parties involved have not transparently documented the distribution of roles under data protection law by this point, time-consuming disputes over responsibility for the data processing arise at precisely the moment when the data breach must be notified promptly – within 72 hours.
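
How little room this deadline leaves can be made concrete with a minimal sketch in Python of the deadline arithmetic behind Art. 33 GDPR; the function and variable names are our own illustration, not part of any official tooling:

    from datetime import datetime, timedelta, timezone

    # Art. 33 (1) GDPR: the supervisory authority must be notified without
    # undue delay and, where feasible, no later than 72 hours after the
    # controller becomes aware of the breach.
    NOTIFICATION_WINDOW = timedelta(hours=72)

    def notification_deadline(became_aware_at: datetime) -> datetime:
        """Latest point in time for notifying the supervisory authority."""
        return became_aware_at + NOTIFICATION_WINDOW

    became_aware = datetime(2023, 9, 4, 14, 30, tzinfo=timezone.utc)
    print(notification_deadline(became_aware))  # 2023-09-07 14:30:00+00:00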

If the deadline expires, a substantial fine may be imposed; in addition, the supervisory authority can take this as an opportunity to conduct further investigations and, if necessary, impose further fines based on their findings. To prevent such investigations from the outset, each individual actor should document its position as (joint) controller or processor in a verifiable manner.

Controllers are decision makers 

In essence, a controller determines the “whether” and “how” of data processing: it decides on the purposes of the processing and specifies the means by which these are to be achieved.

If, for example, a company decides to commission a service provider to evaluate future personnel development using artificial intelligence (“AI”), the company becomes the controller. In addition to the aforementioned notification obligation, the controller is subject to further obligations, including the following:

  • Information obligations under Art. 13 and 14 GDPR,
  • Implementation of data subject rights, e.g., the rights of access and erasure,
  • Preparation of a comprehensive record of processing activities (see the sketch after this list),
  • Conducting a data protection impact assessment, which is of considerable importance, especially for AI applications,
  • Conclusion of a data processing agreement with any processor and verification that appropriate technical and organizational measures are in place.
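
As a rough illustration of the record of processing activities mentioned in the list above, the following Python sketch shows what a single entry might capture. The fields loosely follow Art. 30 (1) GDPR, but the structure itself is our own assumption, not a prescribed format:

    from dataclasses import dataclass

    @dataclass
    class ProcessingActivity:
        # Core contents of one record entry per Art. 30 (1) GDPR (illustrative)
        name: str                           # e.g. "AI-based personnel development"
        purposes: list[str]                 # purposes of the processing
        data_subject_categories: list[str]  # e.g. employees, applicants
        data_categories: list[str]          # e.g. performance data
        recipients: list[str]               # e.g. the commissioned service provider
        erasure_deadlines: str              # envisaged time limits for erasure
        security_measures: list[str]        # technical and organizational measures

    activity = ProcessingActivity(
        name="AI-based personnel development",
        purposes=["evaluation of future personnel development"],
        data_subject_categories=["employees"],
        data_categories=["performance data", "training history"],
        recipients=["external AI service provider (processor)"],
        erasure_deadlines="end of employment plus statutory retention periods",
        security_measures=["encryption at rest", "role-based access control"],
    )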

Joint controllers share decision-making power

If several companies in a group decide to use the AI application together, for example for reasons of cost reduction, and establish a project group with equal representation to implement the AI, much suggests joint controllership. Joint controllership exists where the purposes and means of the data processing are determined through joint cooperation, so that each controller has a determining influence on the data processing. Joint controllers are additionally subject to the following obligation:

  • Conclusion of a joint controller agreement pursuant to Art. 26 GDPR.

In essence, the purpose of such an agreement is to ensure that the joint controllers divide their duties among themselves in a transparent manner, specifying in particular who will handle data subject rights and who will comply with the information obligations.

Processors assist the controller(s)

While the controller determines the purposes and means of data processing, the processor has no decision-making authority of their own. They are bound by the controller's instructions when processing and act merely as the controller's “extended arm”.

If the aforementioned companies decide to commission a service provider to implement the AI application, the service provider is to be classified as a processor. The processor's catalog of duties is not as extensive as the controller's and includes, among other things, the following:

  • Creation of records of processing activities,
  • Notification to the controller upon becoming aware of a data breach,
  • Supporting the controller in fulfilling data subject rights,
  • Data processing only on the instructions of the controller and the other obligations under Art. 28 (3) GDPR.

Indications for an initial classification

If several actors are involved in the processing of personal data, the distribution of roles under data protection law should be examined closely. The following indications, drawn from the role descriptions above, are intended to provide an initial guide to self-assessment:

  • An actor decides whether personal data is processed at all and for which purposes – this points to the role of controller.
  • Several actors jointly determine the purposes and means of the processing, for example in a joint project group – this points to joint controllership.
  • An actor processes personal data exclusively on behalf of and according to the instructions of another – this points to the role of processor.

Conclusion

The importance of a correct, or at least justifiable, classification becomes apparent at the latest when irregularities occur in data processing and pressure arises, both from the GDPR and between the actors, to implement the obligations imposed on each of them in a data protection-compliant manner. It is therefore essential to clarify the allocation of roles from the outset, especially where the circumstances are ambiguous, so as not to create additional time pressure by having to re-evaluate the allocation of roles in an emergency.

We will be glad to answer any questions you may have in this regard.

Regulation of Artificial Intelligence (“AI”) – Current State of AI Regulation and Recommendations for Companies

Few companies or institutions do not use AI at all, considering that spam filters, antivirus protection, and automated translation are based on AI technologies. While AI applications streamline internal processes and save costs, they also come with risks that European lawmakers aim to address uniformly. This blog post provides an overview of the content of the planned AI regulation and formulates specific recommendations to help companies smoothly implement the upcoming requirements.

What will be the content of AI regulation? 

One of the most important aspects of the ongoing discussions has been establishing a common definition of the term “artificial intelligence” as the basis for strict regulation. The current draft definition is quite broad:

An “AI system” is software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with (cf. Art. 3 No. 1 of the draft AI Regulation).

In short, according to the current definition, an “artificial intelligence system” is a machine system that operates with varying degrees of autonomy and produces results such as predictions, recommendations, or decisions that can influence the physical or virtual environment. 

Similar to the General Data Protection Regulation (“GDPR”), which came into effect in 2018, the AI Regulation will be based on the so-called risk-based approach, which draws on the OECD principles listed below:

  • Inclusive growth, sustainable development, and quality of life,
  • Human-centered values and fairness,
  • Transparency and explainability,
  • Robustness and safety,
  • Accountability.

Accordingly, individual AI applications must be classified according to their potential risk in order to comply with the requirements applicable to the respective risk class. The AI Regulation distinguishes between four risk categories, the three most heavily regulated of which are outlined below:

1. Prohibited AI systems 

These are AI systems with unacceptable risks, i.e., those that are considered a threat to humans. They include: 

  • Cognitive behavioral manipulation of humans or certain vulnerable groups, such as voice-controlled toys that promote dangerous behavior in children. 
  • Social scoring: Classification of people based on behavior, socioeconomic status, or personal characteristics. 
  • Real-time and remote biometric identification systems, such as facial recognition. 

2. AI systems with high risk  

These are AI systems that may have a negative impact on safety or fundamental rights. They are generally divided into two groups:

a) AI systems used in products covered by EU product safety regulations, such as  

  • medical devices,  
  • aviation-related systems, and  
  • toys.  

b) AI systems falling into eight specific areas that must be registered in an EU database: 

  • Biometric identification and categorization of natural persons. 
  • Administration and operation of critical infrastructure. 
  • General and vocational education. 
  • Employment, employee management, and access to self-employment. 
  • Access to and use of essential private and public services. 
  • Law enforcement. 
  • Administration of migration, asylum, and border controls. 
  • Support for legal interpretation and application. 

3. AI systems with low risk 

Such systems should only be subject to minimal transparency requirements. This allows the user to make an informed decision about whether they wish to interact with the AI system. Examples include AI systems that generate or manipulate image, audio, or video content, such as deepfakes. 
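
To make the risk-based approach tangible, the classification described above can be sketched as a simple mapping in Python. The category names and the example assignments are our own illustration based on the examples in this post, not terminology prescribed by the draft:

    from enum import Enum

    class RiskCategory(Enum):
        PROHIBITED = "unacceptable risk – may not be placed on the EU market"
        HIGH = "high risk – strict requirements; certain systems must be registered in an EU database"
        LOW = "low risk – minimal transparency requirements"

    # Illustrative assignment of the examples discussed above
    examples = {
        "social scoring of natural persons": RiskCategory.PROHIBITED,
        "AI component of a medical device": RiskCategory.HIGH,
        "AI-supported employee management": RiskCategory.HIGH,
        "deepfake generator": RiskCategory.LOW,
    }

    for application, category in examples.items():
        print(f"{application}: {category.value}")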

Obligations for general-purpose AI systems

One important set of obligations applies to providers of so-called foundation models. These are systems capable of transferring the abilities and knowledge acquired during training to other domains. Before bringing such a model to the EU market, providers must identify and mitigate the associated risks to health, the environment, and fundamental rights.

Well-known generative AI systems such as ChatGPT, which are built on such foundation models, are therefore subject to transparency and legality obligations. For example, AI-generated content must be clearly labeled as such, and it must be disclosed which copyrighted works were used for training purposes.

When will the AI regulation come into force? 

In June 2023, the European Parliament adopted its negotiating position on the draft AI Regulation by a majority. The regulation will now be negotiated in so-called trilogue negotiations between the Commission, the Council, and the European Parliament. Once these three institutions have agreed on a final version, AI providers will have two years to implement the requirements of the final AI Regulation.

It is already clear that AI providers should assess how they can meet the requirements of the AI regulation in order to develop a legally compliant product from the outset. 

Recommendations: 

1. Assess where AI is being used in your organization and create an overview of AI applications and their areas of use, or update an existing software inventory to include this information.
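
What such an overview could look like in structured form is sketched below in Python; the field names are merely our suggestion and can be adapted to an existing software inventory:

    from dataclasses import dataclass

    @dataclass
    class AIApplicationRecord:
        # One entry in an AI application inventory (illustrative structure)
        name: str                 # e.g. "spam filter", "automated translation"
        provider: str             # internal development or external vendor
        area_of_use: str          # e.g. "IT security", "customer communication"
        processes_personal_data: bool
        risk_category: str        # per the draft's risk-based approach, see above
        existing_documentation: list[str]  # e.g. a data protection impact assessment

    inventory = [
        AIApplicationRecord(
            name="automated translation",
            provider="external vendor",
            area_of_use="customer communication",
            processes_personal_data=True,
            risk_category="low risk",
            existing_documentation=["data protection impact assessment"],
        ),
    ]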

2. Determine whether documentation on this AI already exists. For example, there may already be a data protection impact assessment that deals with the processing of personal data and addresses the AI. In many cases, valuable information for documentation under the AI Regulation can be derived from it.

3. Classify the deployed AI applications according to the risk-based approach of the current draft. 

4. If a high-risk or even prohibited AI application is identified, discuss with the IT department, the legal department, and other relevant stakeholders to what extent the application can be modified or what additional risk-mitigating measures can be taken.

5. Check whether the AI applications you use are covered by your liability insurance. Especially in view of the presumption of causality envisaged in the draft AI Liability Directive, which is intended to make it easier for injured parties to enforce claims for damages, the liability risk for companies could increase significantly. Taking out appropriate insurance to cover the financial uncertainties associated with AI applications is therefore in the company's interest.