Controller or processor or…? – Self-discovery in data protection

The classification of the actors involved in data processing can lead to complicated demarcation issues in individual cases, even though the distribution of roles specified by the GDPR appears straightforward. Both controllers and processors may be involved in the processing of personal data. Since the duties of these actors are structured differently and their exposure to liability differs, it is essential for companies and organizations to determine which data protection role they assume.

Liability risks in the event of missing or insufficient role allocation

A missing or incorrect classification as controller or processor becomes apparent at the latest when a notifiable data protection breach is identified. In this case, the competent supervisory authority must be notified of the incident, and the notification must state who the controller is, since the controller would be liable for a failure to notify under Art. 33 GDPR and could face a claim for damages under Art. 82 (2) sentence 1 GDPR.

In contrast, the liability of the processor is limited to breaches of the legal duties assigned to that role. If the parties involved have not transparently documented the distribution of roles under data protection law by this point, however, time-consuming conflicts over responsibility for the data processing arise, even though notification of a data breach must take place promptly – within 72 hours.

If the deadline expires, a substantial fine is threatened; in addition, the supervisory authority can take this circumstance as an opportunity to conduct further investigations and, if necessary, to impose further fines based on them. To forestall such investigations from the outset, each individual actor should document its position as (joint) controller or processor in a verifiable manner.
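
To make the time pressure tangible, the following minimal sketch (Python, purely illustrative; the function name and example timestamp are our own) computes the 72-hour notification window of Art. 33 GDPR, which begins when the controller becomes aware of the breach:

```python
from datetime import datetime, timedelta, timezone

# The Art. 33 GDPR window starts when the controller becomes aware of the
# breach, not when the breach itself occurred.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware_at: datetime) -> datetime:
    """Latest time by which the supervisory authority should be notified."""
    return became_aware_at + NOTIFICATION_WINDOW

# Hypothetical example: awareness on a Monday afternoon
aware = datetime(2023, 9, 4, 14, 30, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2023-09-07 14:30:00+00:00
```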

Controllers are decision makers 

In essence, a controller determines the “whether” and “how” of data processing, thus deciding on the purposes of data processing and specifying the means by which these are to be achieved.

If, for example, a company decides to commission a service provider to evaluate future personnel development using artificial intelligence (“AI”), it becomes the controller. In addition to the aforementioned notification obligation, the controller is subject to further obligations, including the following:

  • Information obligations according to Art. 13 and 14 GDPR,
  • Implementation of data subject rights, e.g., the rights of access and erasure,
  • Preparation of comprehensive records of processing activities,
  • Conducting a data protection impact assessment, which is of considerable importance, especially for AI applications,
  • Conclusion of a data processing agreement with a processor and verification of the existence of appropriate technical and organizational measures.

Joint controllers share decision-making power

If several companies in a group decide to use the AI application together for reasons of cost reduction and, for example, establish a project group with equal representation to implement the AI, this points to joint controllership. Joint controllership exists where the purposes and means of the data processing are determined jointly, so that each controller has a determining influence on the data processing. In addition to the duties of a controller, joint controllers must also ensure the following:

  • Conclusion of a joint controller agreement pursuant to Art. 26 GDPR.

In essence, the purpose of such an agreement is to ensure that the controllers divide their duties among themselves in a transparent manner, in particular who will implement the data subject rights and who will comply with the information obligations.

Processors assist the controller(s)

While the controller determines the purposes and means of data processing, the processor has no decision-making authority of their own. They are bound by the controller’s instructions when processing and act merely as the controller’s “extended arm”.

If the aforementioned companies decide to commission a service provider to implement the AI application, the service provider is to be classified as a processor. The catalog of duties of the processor is not as extensive as that of the controller and includes, among other things, the following:

  • Creation of records of processing activities,
  • Notification to the controller upon becoming aware of a data breach,
  • Supporting the controller in implementing data subject rights,
  • Data processing only on the instructions of the controller and the other obligations under Art. 28 (3) GDPR.

Indications for an initial classification

If other actors are involved in the processing of personal data, the distribution of roles under data protection law should be examined more closely. The following indications are intended to provide an initial guide to self-assessment.
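
As one possible self-check, the following sketch condenses the criteria described above into a simplified decision aid; the function, its parameters, and the example are our own illustration, not a legal test:

```python
# Simplified decision aid based on the criteria described in this article.

def initial_role(decides_purposes: bool, decides_means: bool,
                 jointly_with_others: bool) -> str:
    """Rough first indication of the data protection role."""
    if decides_purposes and decides_means:
        return "joint controller" if jointly_with_others else "controller"
    return "processor (acts only on the controller's instructions)"

# Example: a service provider that merely follows its client's instructions
print(initial_role(decides_purposes=False, decides_means=False,
                   jointly_with_others=False))
```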

Conclusion

The importance of a correct, or at least justifiable, classification becomes apparent at the latest when irregularities occur in data processing and pressure arises, both from the GDPR and between the actors, to implement the obligations imposed on them in a compliant manner. It is therefore essential to discuss the allocation of roles from the outset, especially where circumstances are ambiguous, so as not to create additional time pressure by having to re-evaluate the allocation of roles in an emergency.

We will be glad to answer any questions you may have in this regard.



Competence of national competition authorities in GDPR matters

According to the case law of the ECJ, national competition authorities may also check for violations of the GDPR as part of their competition law review. 

Opinion of the Court

In this regard, the ECJ states in a press release on the judgment in Case C-252/21:
“In its judgment delivered today, the Court states that, in the context of the examination of whether an undertaking is abusing a dominant position, it may prove necessary for the competition authority of the Member State concerned also to examine whether the conduct of that undertaking is compatible with provisions other than those of competition law, such as those of the GDPR.” 

Complementing this, the ECJ states with regard to the scope of the examination:  
“The examination as to whether the GDPR is complied with is carried out … exclusively in order to establish the abuse of a dominant position and to impose measures to remedy that abuse in accordance with the competition law provisions.” 

The ECJ thus expands the review competence of national competition authorities. At second glance, however, the decision is a positive one for many companies, because the ECJ thereby rejects the legal view, held by some, that the GDPR is a competition standard.

Clear allocation of tasks for public authorities

Furthermore, the ECJ clearly outlined the competence of the national competition authorities in its decision, thereby strengthening the examination jurisdiction of the data protection supervisory authorities:
“However, if the NCA finds a breach of the GDPR, it does not take the place of the supervisory authorities established by that Regulation.” 

With regard to the question of how to prevent competition authorities from assessing the facts differently than the supervisory authorities, resulting in conflicting decisions, the ECJ also provided a clarifying finding:
“In order to ensure a coherent application of the GDPR, NCAs are required to coordinate and cooperate loyally with the authorities supervising compliance with that regulation.” 

Conclusion

Overall, it can be stated that while the ECJ’s decision grants competition authorities a right of review with regard to potential GDPR violations, the restrictions set out in the judgment contribute to legal certainty for companies.



Regulation of Artificial Intelligence (“AI”) – Current state of AI Regulation and Recommendations for companies 

There are few companies or institutions that do not use AI, considering that spam filters, antivirus protection, and automated language translation are based on AI technologies. While AI applications streamline internal processes and save costs, they also come with risks that European lawmakers aim to address uniformly. This blog post provides an overview of the content of the planned AI regulation and formulates specific recommendations for companies to smoothly implement the upcoming requirements.

What will be the content of AI regulation? 

Given the strict regulation planned, one of the most important aspects of the ongoing discussions was establishing a common definition of the term “artificial intelligence”. The current definition is quite broad:

An “AI system” is software that is developed with one or more of the techniques and concepts listed in Annex I and can, for a given set of human-defined objectives, generate results such as content, predictions, recommendations, or decisions that influence the environment with which it interacts (cf. Art. 3 No. 1 of the draft AI Regulation).

In short, according to the current definition, an “artificial intelligence system” is a machine system that operates with varying degrees of autonomy and produces results such as predictions, recommendations, or decisions that can influence the physical or virtual environment. 

Similar to the General Data Protection Regulation (“GDPR”) that came into effect in 2018, the AI regulation will be based on the so-called risk-based approach, derived from the OECD principles listed below: 

  • Inclusive growth, sustainable development, and quality of life,
  • Human-centered values and fairness,
  • Transparency and explainability,
  • Robustness and safety,
  • Accountability.

Accordingly, individual AI applications must be classified according to their potential risk in order to comply with the requirements applicable to each risk class. The AI regulation distinguishes between four risk categories; the three that carry specific requirements are:

1. Prohibited AI systems 

These are AI systems with unacceptable risks, i.e., those that are considered a threat to humans. They include: 

  • Cognitive behavioral manipulation of humans or certain vulnerable groups, such as voice-controlled toys that promote dangerous behavior in children. 
  • Social scoring: Classification of people based on behavior, socioeconomic status, or personal characteristics. 
  • Real-time and remote biometric identification systems, such as facial recognition. 

2. AI systems with high risk  

These are AI systems that have a negative impact on safety or fundamental rights. They are generally classified into two categories: 

a) AI systems used in products covered by EU product safety regulations, such as  

  • medical devices,  
  • aviation-related systems, and  
  • toys.  

b) AI systems falling into eight specific areas that must be registered in an EU database: 

  • Biometric identification and categorization of natural persons. 
  • Administration and operation of critical infrastructure. 
  • General and vocational education. 
  • Employment, employee management, and access to self-employment. 
  • Access to and use of essential private and public services. 
  • Law enforcement. 
  • Administration of migration, asylum, and border controls. 
  • Support for legal interpretation and application. 

3. AI systems with low risk 

Such systems should only be subject to minimal transparency requirements. This allows the user to make an informed decision about whether they wish to interact with the AI system. Examples include AI systems that generate or manipulate image, audio, or video content, such as deepfakes. 

One important obligation applies to providers of so-called foundation models. These are systems capable of transferring the abilities and knowledge learned during a training process to another domain. Before such providers bring their AI to the EU market, they must identify and mitigate the associated risks to health, the environment, and individual rights.

Well-known generative AI systems such as ChatGPT, which are built on such foundation models, are therefore obligated to ensure transparency and legality. For example, AI-generated content must be clearly marked as such, and it must be disclosed which copyrighted works were used for training purposes.
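
To illustrate the risk-based approach, the following sketch (Python, our own simplification) maps hypothetical example applications to the categories described above; the fourth, residual category of minimal-risk systems, such as simple spam filters, carries no specific obligations under the draft:

```python
from enum import Enum

# Our own simplification of the draft's risk classes; the example mapping is
# purely illustrative, not an official classification.
class RiskClass(Enum):
    PROHIBITED = "unacceptable risk - may not be placed on the market"
    HIGH = "high risk - registration and strict requirements apply"
    LOW = "low risk - transparency obligations, e.g. labelling deepfakes"
    MINIMAL = "minimal risk - no specific obligations under the draft"

EXAMPLES = {
    "social scoring system": RiskClass.PROHIBITED,
    "CV screening software": RiskClass.HIGH,  # employment is a high-risk area
    "deepfake generator": RiskClass.LOW,
    "spam filter": RiskClass.MINIMAL,
}

for system, risk in EXAMPLES.items():
    print(f"{system}: {risk.value}")
```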

When will the AI regulation come into force? 

In June 2023, the European Parliament voted on and adopted the draft AI regulation with a majority. Now, the regulation will be negotiated within the framework of so-called trilogue negotiations between the Commission, the Council, and the European Parliament. Once these three parties have agreed on a final version, providers of AI will have two years to implement the requirements of the final AI regulation. 

It is already clear that AI providers should assess how they can meet the requirements of the AI regulation in order to develop a legally compliant product from the outset. 

Recommendations: 

1. Assess where AI is being used in your organization and create an overview of AI applications and their areas of use, or update an existing software inventory to capture this information. 

2. Determine if there is documentation related to this AI. For example, there may already be a data protection impact assessment that deals with the processing of personal data and addresses AI. In many cases, valuable information for documentation under the AI regulation can be derived from this. 

3. Classify the deployed AI applications according to the risk-based approach of the current draft. 

4. If a high-risk or even prohibited AI application is identified, discuss with the IT department, the legal department, and other relevant persons to what extent the application can be modified or what additional risk-mitigating measures can be taken. 

5. Check whether the AI applications you use are covered by your insurance in the event of liability claims. Especially in view of the presumption of causality envisaged in the draft AI Liability Directive, which aims to make it easier for injured parties to enforce claims for damages, the liability risk for companies could increase significantly. Taking out appropriate insurance to cover the financial uncertainties associated with AI applications should therefore be in the company’s interest. 



Balance: Operational IT <-> IT Security <-> Data Protection

Weight distribution – basic concepts

Crises like Covid-19 force companies to rethink their processes and adapt their IT structure to them. Companies need to maintain a balance between the different functional areas within their organizations. The specialist areas discussed here involve providing technical support for the execution of business activities (operational IT), protecting the personal data of customers and employees (data protection), and ensuring that the IT infrastructure required for this purpose is adequately protected (IT security).

Applied corporate practice often consists of maintaining operational business through the use of “operational” IT. In this context, the implementation of data protection is now regarded as a legal requirement and, although perceived as a nuisance, is mostly implemented through appropriate staffing. The constant loser in this process is the area of classic IT security.

Structures are often set up in such a way that the “IT” area takes care of all three areas with joint responsibility. Historically, legal requirements, such as data protection, were often pushed to the “IT specialists” by company management. From today’s perspective, such a blending of the different areas turns out to be unwise. To understand this, one must look at the objectives of the different areas.

IT operations -> execution of business operations
IT security -> security of the IT infrastructure
Data protection -> protection of personal data

The cornerstone is always the general use of IT-based systems to carry out the company’s purpose. The focus here is on achieving corporate goals. Here, solutions are sought and implemented that enable the company’s work and, if necessary, make it more efficient. In a hypothetical optimal environment, the operational units of the IT structure are free to develop and can support corporate processes according to current technical possibilities. This should be the focus of an active innovative IT department. In contrast to this, IT security and data protection are in some way opposed; for a simplified view, I will leave financial aspects out of the equation at this point.

The main task of IT security is to ensure that the systems operated in the operational business are not stopped or manipulated by external influences. In this context, it is important to know the current state of the art in the industry and to implement it accordingly. Legal requirements often play only a subordinate role here for the time being. Although there is an increasing number of legal standards today, these are mostly limited to certain industries (e.g. critical infrastructure, healthcare, etc.). In addition, there are regulations in the area of trade secret protection, but these allow entrepreneurial leeway and are probably more relevant in relations between companies themselves.

Data protection, on the other hand, is today based almost exclusively on narrowly defined legal requirements, which must be complied with under penalty of mandatory sanctions. The sensitivity with which personal data is handled means that compliance is demanded not only by government controls, but also by the individuals concerned themselves. Often, violations of these requirements are “easy” for public authorities to trace, and sanctions can follow immediately and with considerable financial consequences.

The balancing act

It is the task of corporate management to decide, among other factors on a risk-oriented basis, how to weight the fields of work outlined here in schematic form. This often leads to the following reasoning and result:

Operational IT is provided with the minimum staffing necessary to keep the required IT processes running. In addition, operational IT is entrusted with testing the feasibility of new procedures and implementing them where appropriate. Due to the increased risk assessment in the area of data protection as a result of the GDPR, regulations have mostly been implemented to ensure that all new procedures are properly documented and that information can be provided quickly to data subjects. IT security, by contrast, is often run as a staff unit within operational IT, owing to its lower legal relevance and a complete disregard of the operational risk. – The picture is, of course, somewhat exaggerated and abbreviated.

The problems here are quite obvious: an IT area that is responsible for “keeping things running” is not suited to enforcing analyses of the security of individual applications and processes. The objectives of these areas are fundamentally contradictory. Moreover, the setup is not only questionable in terms of security, it also slows down innovation. When new projects are introduced, the thought is too often “we can’t do that because …”. IT departments thus become “preventers” instead of “inventors”. Most technical solutions today can be implemented with the right approach and compromise. However, this requires interplay between the IT, IT security and data protection players on the same technical level. When the areas are merged, the “core competence” of whichever person is ultimately responsible tends to dominate.

Conclusion

In my opinion, this leads to a significant slowdown of innovative solutions in most companies. It is not the task of the specialist departments to make these decisions and trade-offs conclusively and to anticipate them for the decision-makers. A modern digital enterprise must locate these decision-making competencies at the highest level. In the end, it is an entrepreneurial decision which weighting is selected. This can vary greatly from application to application. However, it must always be made consciously and with knowledge of all circumstances. That is only possible with neutral, independent reporting from the different departments.

The goal remains the achievement of entrepreneurial success while complying with the legal framework and taking calculated risks. – It is important to maintain a balance.



Breach Counselor

IT Security Incident Crisis Management

In today’s world, data breaches are becoming increasingly common and can have devastating consequences for individuals and businesses. If you have been the victim of an IT security incident, a legal advisor (Breach Counselor) can help you deal with the aftermath and protect your interests.

What is a Breach Counselor?

A Breach Counselor is a professional who specializes in assisting and advising individuals and businesses affected by a data breach. Their services can vary depending on the specific needs of the client, but they typically provide a range of services to help you manage the crisis, limit the damage, and protect your interests.

What is their job?

Perhaps the most important service a breach counselor provides is crisis management. An IT security incident can be a stressful and overwhelming experience. A breach counselor can help you manage the crisis by providing emotional support, practical advice, and a calm, reassuring presence. They can help you assess the severity of the security breach, identify the risks, and develop a plan to mitigate the damage and prevent further threats.

In addition to crisis management, a breach counselor can also provide valuable guidance on risk assessment. They can help you assess the risk of a data breach and advise you on the potential consequences of a security breach. They can also recommend actions you can take to prevent further data compromise and protect your interests.

Another important service provided by a breach counselor is communication. A data breach can be a sensitive and complex situation, and effective communication is essential to ensure that the right information reaches the right people at the right time. A breach counselor can help you communicate with customers, stakeholders and regulators to ensure everyone is informed and the message is delivered effectively. They can draft press releases, create conversation guides, and coordinate with affected parties to ensure the message is consistent and accurate.



NIS-2 Directive: What companies need to consider to ensure cybersecurity

In today’s digital era, businesses are more dependent than ever on the benefits of modern technologies. But with advancing digitalization come increased risks, particularly with regard to cyberattacks and data breaches. To ensure the security of network and information systems and to strengthen the protection of data, the new NIS-2 Directive has been introduced.

As a business, you should not underestimate the NIS-2 directive, as it sets out extensive obligations that you must comply with to ensure the protection of your IT infrastructure and cybersecurity. To help you meet the requirements of this new legislation, we have summarized the key points below.

The NIS-2 Directive now applies to smaller companies than before: companies with at least 50 employees or an annual turnover or annual balance sheet total of at least 10 million euros are covered. Violations of the Directive are subject to severe sanctions similar to those of the General Data Protection Regulation (GDPR). The member states of the European Union must transpose the NIS-2 Directive into national law by October 17, 2024. Although the national implementation law is not yet available, companies are well advised to engage with the extended obligations and possible sanctions of the new Directive already.

1. Which companies are affected by the NIS-2 Directive?

The Directive generally applies to companies that belong to either a “high criticality sector” as defined in Annex I of the Directive or an “other critical sector” as defined in Annex II. In addition, these companies must be classified as medium-sized, meaning they employ at least 50 people or have an annual turnover of at least 10 million euros or an annual balance sheet total of at least 10 million euros. Furthermore, they must provide their services within the European Union.
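
As a first rough self-check, the following sketch (Python, our own illustration; the Annex I/II sector analysis is reduced to a single boolean flag) applies the size and scope criteria just described:

```python
# Rough scope check; thresholds follow the figures quoted above.

def nis2_in_scope(employees: int, turnover_eur: float, balance_eur: float,
                  in_annex_sector: bool, serves_eu: bool) -> bool:
    """First, non-binding indication of whether NIS-2 may apply."""
    medium_sized = (employees >= 50
                    or turnover_eur >= 10_000_000
                    or balance_eur >= 10_000_000)
    return in_annex_sector and medium_sized and serves_eu

# Hypothetical example: waste management firm, 60 employees, EUR 8M turnover
print(nis2_in_scope(60, 8_000_000, 6_000_000,
                    in_annex_sector=True, serves_eu=True))  # True
```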

“High criticality sectors” include, for example, energy, transport, banking, financial market infrastructures, healthcare, drinking water, wastewater, digital infrastructure, ICT service management (B2B), public administration, and space. “Other critical sectors” include, but are not limited to, postal and courier services, waste management, the production and distribution of chemicals, the production, processing and distribution of food, manufacturing (data processing equipment, mechanical engineering, motor vehicles, other transport equipment), digital service providers (online marketplaces, online search engines, social networking platforms), and research.

2. Risk Management Measures

The NIS-2 Directive places great emphasis on an effective risk management culture within organizations. Essential and important entities are required to take appropriate technical, operational and organizational measures to ensure the security of their network and information systems. These include, for example, risk analyses, security concepts, backup and crisis management, access controls, and encryption concepts. It is important that these measures reflect the state of the art and are appropriate to the individual risks.

3. Cross-Threat Approach

Cyber threats can have different causes, so your risk management measures should take a cross-threat approach. This means that you need to protect not only against cyberattacks, but also against physical threats such as theft, fire, or unauthorized access to your information and data processing assets. Decisions about the risk management measures taken should be based on your organization’s exposure to risk and proportionate to the societal and economic impact of a security incident.

4. Security Incident Reporting Requirements

Under NIS-2, essential and important entities are required to immediately report security incidents that have a significant impact on their services. These reports are made in a multi-step process that includes an early warning, a report of the incident itself, and a final report, as sketched below. In addition, you may be required to notify affected customers and users of significant security incidents that could impact the delivery of your services.
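
The following sketch illustrates the deadlines of this multi-step process as set out in Art. 23 of the Directive (early warning within 24 hours of awareness, incident notification within 72 hours, final report within one month of the notification); the national implementation law may refine these, and the timestamps are our own:

```python
from datetime import datetime, timedelta, timezone

def reporting_timeline(became_aware_at: datetime) -> dict:
    """Deadlines of the multi-step reporting process under Art. 23 NIS-2."""
    early_warning = became_aware_at + timedelta(hours=24)
    notification = became_aware_at + timedelta(hours=72)
    final_report = notification + timedelta(days=30)  # "one month" simplified
    return {"early warning": early_warning,
            "incident notification": notification,
            "final report": final_report}

aware = datetime(2024, 10, 18, 9, 0, tzinfo=timezone.utc)  # hypothetical
for step, due in reporting_timeline(aware).items():
    print(f"{step}: {due:%Y-%m-%d %H:%M} UTC")
```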

5. Governance and Accountability

The NIS-2 Directive places great emphasis on the responsibilities of corporate governance bodies. They must ensure that adequate resources are allocated to cybersecurity assurance and that there is a clear division of responsibilities within the organization. In addition, regular cybersecurity assessments should be conducted and adjustments made as necessary to keep pace with changing threats.


6. Enforcement and fines

The NIS-2 Directive endows the supervisory authorities with broad powers, distinguishing between essential and important entities. Authorities now have the power to conduct on-site inspections and to request certain information and data access. Essential entities are subject to broader oversight powers, including measures taken without specific cause, such as audits, regardless of a concrete risk assessment.

In enforcing these duties, the authorities can take largely the same actions against operators of important entities as against operators of essential entities. They have various tools at their disposal, such as issuing binding instructions, setting deadlines, and imposing fines. In the case of essential entities, the authorities can even order the temporary suspension of management personnel.

In addition, severe fines threaten in the event of violations. For operators of essential entities, the maximum fine is either 10 million euros or 2 percent of annual worldwide turnover, whichever is higher. For operators of important entities, the maximum is either 7 million euros or 1.4 percent of annual worldwide turnover, whichever is greater.
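
Expressed as a simple calculation (Python, our own illustration with a hypothetical turnover figure):

```python
# The higher of the fixed amount and the turnover-based percentage applies.

def max_fine(annual_worldwide_turnover_eur: float, essential: bool) -> float:
    fixed, pct = (10_000_000, 0.02) if essential else (7_000_000, 0.014)
    return max(fixed, pct * annual_worldwide_turnover_eur)

# Hypothetical example: essential entity, EUR 800 million worldwide turnover
print(f"{max_fine(800_000_000, essential=True):,.0f} EUR")  # 16,000,000 EUR
```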

Companies subject to the NIS-2 Directive should also always observe the provisions of the General Data Protection Regulation (GDPR) in the event of a significant security incident, since personal data may be affected as well. Irrespective of a notification under the NIS-2 Directive, the incident must then also be reported to the data protection authority in accordance with Art. 33 GDPR, without undue delay and, where feasible, within 72 hours.

In relation to the GDPR, the NIS-2 Directive contains a single precedence provision: if the data protection authority imposes a fine under the GDPR, a fine under Art. 35 (2) of the NIS-2 Directive is excluded for the same breach. Other enforcement actions remain possible, however.

Contact us today to strengthen your cybersecurity together!
