What ChatGPT Could Mean for Cybersecurity & Data Privacy


ChatGPT is a chatbot that uses a large language model trained by OpenAI. Models of this kind are generally trained on large bodies of text, with human oversight and intervention to test how the algorithm performs. When Amazon’s Alexa voice assistant launched, it quickly emerged that Amazon was employing humans to sample-check conversations between users and Alexa to determine whether Alexa was responding as expected.

This could contravene GDPR in two ways. First, the bodies of text used to train the algorithm must be lawfully available for that use; the fact that something is publicly accessible does not make it so. Clearview AI, for example, built its facial recognition database from images scraped from the internet, and has since been served enforcement notices by several data protection regulators prohibiting it from continuing to use images of their citizens in this way.

Second, if ChatGPT users don’t know about all the potential uses of their personal data and haven’t provided consent, or been offered the opportunity to opt out where necessary, this too could be unlawful. It is interesting to note that when law firm Fieldfisher asked ChatGPT whether it complied with GDPR, it replied: “I cannot say for certain whether ChatGPT is GDPR compliant, as I am not aware of its specific design or implementation.”

The media is currently full of articles by authors who have tested ChatGPT and found that it returns incomplete or inaccurate answers. Under the accountability principle, ChatGPT would not be responsible if someone chose to rely on its advice; however, there is clearly scope for problems.