Artificial intelligence (AI) is a hot topic for many businesses at the moment. With more AI-driven tools on the market, the question of risk and regulation inevitably arises for any business owner looking to adopt them.

AI tools can, by their nature, process many different types of data, including personal data. When any new technology processes personal data, both the vendor and the adopter must consider how privacy and data protection interact with that technology. It is important to note that risk and regulation can arise even in the absence of personal data, but we shall focus on the privacy aspects today.

Although AI and privacy are two separate things, there is plenty of interplay between the two. A good analogy is to think of AI as the physical car (the technology), while privacy is the driving licence, the requirement for an airbag, and other health and safety factors (protective rules and outcomes). The two must work together.

The good news is that while many tools appear to be new, the existing data protection framework in the UK gives adopters a strong starting point. Our personal data governance requirements already demand transparency, risk management, and harm reduction, alongside giving individuals rights and freedoms in respect of their data. Privacy professionals will already be well-versed in conducting balancing exercises, and the same will be necessary to assess AI. They will also understand the ethical questions arising from such technology (i.e. just because something can be done, doesn’t mean that it should be done).

Of course, the legal landscape of AI is changing globally, with varying levels of importance and priority across jurisdictions. AI is recognised, especially in the UK, as an ‘economy-booster’, and so it will continue to be developed and promoted. This also means that the law is likely to evolve. In short, adopters must recognise that their data and technology governance exercises are likely to require a degree of agility.

Here are our top tips for anyone looking to adopt an AI tool:

Early due diligence

As should be the case for any new technology adopted by a business, due diligence should be conducted early and thoroughly to understand any resulting impact on individuals. For example, an AI tool may require extended retention of data to enable ‘learning’, which may conflict with the data minimisation principle. When done properly, a data protection impact assessment (DPIA) is a great tool for assessing risks to personal data arising from the adoption of new technology, and businesses can build on it by incorporating AI-specific questions. A good starting point is the AI risk toolkit published by the Information Commissioner.

AI taskforce

A common problem is that businesses allocate the adoption of new technology to one department in isolation, such as IT, meaning the technology is likely to be assessed from a single perspective. A taskforce can bring together the knowledge necessary to balance risk appropriately. For example, in the EU, recruitment AI tools are likely to be categorised as ‘high risk’, requiring additional protective measures. When adopting such a tool, an early-stage taskforce containing stakeholders from HR, legal, IT and any DPO will be better placed to identify and mitigate risk.

Human intervention

Adopters need to consider what the AI will be doing on their behalf and ensure that any ‘human intervention and escalation’ standards are met (for example, the requirements for automated decision-making under the UK GDPR). It is also important to understand that human intervention must be effective and demonstrate ‘proper’ scrutiny – in some cases, a lone ‘reviewer’ may not be considered effective.

Review documents

If you implement an AI tool, it will without doubt affect your existing contracts, notices and policies. For example, suppose a law firm adopts a chatbot that generates output which could be perceived as legal advice. Contract terms and appropriate disclaimer notices would offer the adopter protection, while policies maintain the transparency necessary to help users understand what happens to their data once it enters the chatbot system.

To summarise, as the AI marketplace continues to expand and become more accessible to businesses of all sizes, it is important for businesses to understand how to navigate the procurement of these tools effectively. Fortunately, this isn’t entirely uncharted territory, and it’s likely that you already have access to individuals with the skills necessary to establish and maintain proper AI governance. As always, our technology team is here to give a helping hand and can be contacted on 0345 450 5558 or enquiries@stephens-scown.co.uk