The Information Commissioner’s Office (ICO) has recently blogged about the preparation of Data Protection Impact Assessments (DPIAs) in the development of AI systems/software. Before exploring their advice, we’ve set out the basics.

What is a DPIA? 

DPIAs are the first cousin of privacy impact assessments (PIAs) – PIA was the term used under the old data protection legislation. Like PIAs, DPIAs are an assessment of the data protection risks associated with a processing activity. They form part of your accountability obligations under GDPR.

A DPIA should: 

  • describe the processing you are undertaking and the intended/desired outcomes of your processing;
  • assess the nature and sensitivity of the data being processed; and
  • consider the nature of your relationship with the data subject.

This information will allow you to assess the necessity and proportionality of the processing activity. Wherever risks to the data subject’s rights are identified, the DPIA should set out the steps that will be (or are being) taken to mitigate those risks.

There is no definitive DPIA template you must follow; however, the ICO has published additional guidance on this and, if in doubt, you should seek professional advice. This is something Stephens Scown can help you with.

When should you undertake a DPIA?

An effective DPIA should assist you in minimising the data protection risks in your project, which means that it should be undertaken as early as possible in the development stages. It also means that your DPIA should be treated as a “live” document and reviewed regularly as your project and processing activities evolve.

Undertaking a DPIA early on in project planning and development means that data protection will be woven into the fabric of the project, limiting the risk of a negative DPIA further down the line. Integrating robust data governance principles may also enhance the perceived trustworthiness of your AI.

Why should a DPIA be undertaken?

As mentioned above, DPIAs form part of your accountability obligations under GDPR, and their effective use is evidence that you are taking your data protection obligations seriously.

You are obliged to conduct a DPIA for any type of data processing that is likely to result in a high risk to individuals’ interests. The ICO can impose significant fines of up to €10 million or 2% of global turnover (whichever is higher) for failure to carry out a DPIA when required.

Why is this relevant to AI? 

Article 35(3) of GDPR sets out three instances of processing that will always require a DPIA, and they are closely linked to AI. 

The three instances are: 

  1. Systematic and extensive evaluation/profiling of individuals with significant effects on those individuals; 
  2. Large scale processing of sensitive data; and
  3. Monitoring of a publicly accessible area on a large scale.

The ICO gives examples of other “likely high risk” factors, such as automated decision-making, matching or combining datasets, and evaluation and scoring – and, most relevant here, the innovative use or application of new technological or organisational solutions.

Taken in the round, this means that whenever you are developing or deploying AI methods in the processing of personal data, a DPIA is likely to be necessary. 

So what does the ICO recommend? 

In its blog on the AI auditing framework, the ICO highlights the following as the key components of a DPIA in the context of artificial intelligence:

  1. A systematic description of processing; 
  2. Assessing necessity and proportionality of the processing;
  3. Identifying risks to rights and freedoms of data subjects;
  4. Identifying measures to address the risks; and
  5. Maintaining the document.

As you would expect, the ICO’s blog emphasises the human element of data processing. One of the key takeaways is that public perception of the data processing is an important factor to consider – would data subjects reasonably expect the processing to be undertaken by an AI system? 

The blog also invites us to consider how the AI system interacts with human decision-makers and what freedom those individuals have to disagree with/override the AI system.

Finally, the “risks” may not be those that you anticipate. We are increasingly familiar with the social risks of biased data, but in your DPIA you may be required to consider how any such bias could affect data subjects’ rights under equality and anti-discrimination legislation. What about other human rights, such as freedom of speech? If these rights are limited, is the limitation proportionate? It is important to consider the legal framework beyond data protection. All of these issues should be documented in your DPIA.