Samsung, how do you solve a problem like ChatGPT – ban it?

Artificial intelligence (AI) is, at its core, just another piece of software operating according to instructions written by humans. And whenever a developer writes software, it is highly likely to contain bugs – some of which may manifest as security or privacy issues. As individuals and businesses continue to explore possible use cases for generative AI, it's crucial they remember that publicly available models, like ChatGPT and Bard, are still in their infancy.

As with any other piece of software made available to users, it’s important to make every reasonable effort to identify and remediate security and privacy weaknesses – that’s why we are seeing greater regulation and governance of the software supply chain.
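One concrete way to put that into practice is to bake automated vulnerability checks into the development pipeline. The sketch below is a minimal illustration in Python, assuming the pip-audit tool (https://pypi.org/project/pip-audit/) is installed; the wrapper function is our own illustration rather than a prescribed workflow, and real supply-chain governance would go well beyond a single scan.

```python
import subprocess

def audit_dependencies() -> str:
    """Run pip-audit against the current Python environment and return
    its JSON report of dependencies with known vulnerabilities.

    pip-audit exits non-zero when it finds issues, so a non-zero return
    code is not treated as a failure here -- the report is the point.
    """
    result = subprocess.run(
        ["pip-audit", "--format", "json"],
        capture_output=True,
        text=True,
        check=False,
    )
    return result.stdout

if __name__ == "__main__":
    print(audit_dependencies())
```

Running a check like this on every build is one small instance of the "identify and remediate" discipline that supply-chain regulation is increasingly demanding.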

Test early, test often and fix any weaknesses – these are sensible maxims to apply to any software development process. There's no doubt OpenAI will be taking steps to mature its approach in this area. In the meantime, businesses should carefully weigh up the risks of sending sensitive information outside their organisations, particularly while experimenting with these publicly available tools. Feeding private or sensitive information into a system capable of learning from and reusing it – even if that specific feature is not currently enabled – could cause serious problems down the line if further, larger data leaks were to happen.
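For teams that still want to experiment, a lightweight safeguard is to screen prompts before they ever leave the organisation. The Python sketch below is a minimal illustration of that idea; the pattern names and placeholder tags are hypothetical, and a production deployment would lean on a proper data-loss-prevention (DLP) service rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for illustration only -- a real deployment would use
# a dedicated DLP tool with far more robust detection than these regexes.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder tag
    before the prompt is sent to an external, publicly available model."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise this: contact jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact(raw))
    # -> Summarise this: contact [EMAIL REDACTED], card [CREDIT_CARD REDACTED].
```

Even a simple gate like this reduces the chance that customer data or credentials end up in a third-party system whose future behaviour, and future leaks, the business cannot control.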