Apple, Samsung and Amazon are reportedly among the companies banning the internal business use of generative artificial intelligence tools such as ChatGPT, Microsoft Bing Chat and Google Bard. So, what are the privacy and data protection concerns?
What is generative AI?
Generative artificial intelligence (AI) is a machine learning tool that generates output in response to prompts; the quality of that output depends heavily on the dataset used to train the tool. These tools are well known for their human-like conversational skills and for creating content such as text, code, images, audio and video. Given the size of the datasets used to train generative AI tools, they inevitably include personal data, including special category personal data.
When a user uploads data to a chatbot platform, the AI may, depending on the terms of use, reuse that data in future. This is a concern for businesses because any confidential information uploaded to a generative AI platform (which could include the personal data of employees or customers) may be processed by, and shared with, a third party outside the business. One example occurred in April 2023, when Samsung revealed that an engineer had leaked confidential code by uploading it to ChatGPT. This reportedly prompted Samsung to swiftly ban any further use of ChatGPT by its employees.
In March 2023, the Italian data protection regulator, the Garante, temporarily banned the use of ChatGPT in Italy. The ban was prompted by concerns about the lack of transparency to users over how the information they provide might be used, as well as concerns about how the platform processed user data. These issues have since been resolved with the Garante, but they highlight some of the areas that generative AI companies should consider.
As with any data controller, generative AI companies should ensure that there is no ambiguity as to how the personal data provided to them will be used.
Some generative AI algorithms operate as what is known as a "black box AI model": the tool cannot explain or justify how it produced its output. This creates accountability concerns, as the user does not know what sources the AI has relied upon and therefore cannot verify the accuracy of the information.
Key considerations when using generative AI
Some AI tool providers state that they cannot delete specific prompts from users’ history, and although they generally attempt to anonymise information, they advise users not to share any sensitive information in their interactions with chatbot AI platforms.
When using AI platforms, you should remember to:
- avoid sharing sensitive or financial information;
- delete your chat history;
- conduct a data protection impact assessment prior to such use; and
- be mindful that whatever you share will be stored and used to develop the AI.
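For technical teams, the first point above can be partly operationalised by screening prompts before they leave the business. The sketch below is purely illustrative and assumes nothing about any real chatbot API: the `redact` helper and its regex patterns are our own examples of catching obvious identifiers (email addresses, UK-style phone numbers, card-like digit runs), and a real deployment would need far more robust detection.

```python
import re

# Illustrative patterns only -- a sketch, not a complete PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b0\d{3} ?\d{3} ?\d{4}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known pattern with a labelled placeholder,
    so the redacted text can be reviewed before it is sent to a third party."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or call 0161 496 0000."))
```

Screening of this kind reduces, but does not remove, the risk of disclosure; it is a supplement to, not a substitute for, the policy and impact-assessment steps listed above.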
If you have any further enquiries regarding data protection and generative AI, please feel free to contact our Intellectual Property, Data Protection & Technology team; we would be happy to help.