OpenAI to fight AI hallucinations

OpenAI’s concerns about AI ‘hallucinations’ are not unfounded. Generative AI models such as ChatGPT are generalists, not specialists: they have ingested vast amounts of data from across the internet, so their systems have been exposed to every kind of subject matter. More importantly, the goal of generative AI lies in the name: to generate a human-like, plausible-sounding output, i.e. an answer ‘at any cost’. That makes it remarkably good at writing a poem or an advertising jingle, but it cannot be relied on to always give the correct answer.
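To make that point concrete, here is a minimal Python sketch of how a generative model chooses its next words: it samples by plausibility alone, with no step that checks the claim against a source of truth. The tokens and probabilities below are illustrative assumptions, not real model output.

```python
import random

# Toy next-token distribution: the model scores continuations by
# plausibility (probability), not by factual accuracy. These tokens
# and probabilities are illustrative assumptions.
next_token_probs = {
    "in 1969": 0.55,   # fluent and correct
    "in 1972": 0.30,   # fluent but wrong
    "in 1958": 0.15,   # fluent but wrong
}

prompt = "The first Moon landing took place"

# Sample a continuation weighted by plausibility alone; in this toy
# example, a confident-sounding but false answer comes out roughly
# 45% of the time, and nothing in the loop verifies the claim.
tokens, weights = zip(*next_token_probs.items())
choice = random.choices(tokens, weights=weights)[0]
print(f"{prompt} {choice}.")
```

Every option reads as fluent English; the model has no internal notion of which one is true. That is the sense in which plausible output comes ‘at any cost’.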

In our era of fake news and attention-grabbing headlines, AI hallucinations are an obvious cause for concern. Examples of these ‘hallucinations’ (plausible but fundamentally incorrect output) are becoming increasingly common, even in the legal sphere, where generative AI has cited non-existent cases and precedents. Rather than as a source of fact, ChatGPT and other generalist models should be thought of as a well-read friend who can converse on a wide range of subjects but is not an expert in any of them.

For reliable, fact-based information, businesses should instead turn to specialist, analytical AI. This technology is trained intensively on verified, specialist data, and it will be the true blueprint for an AI-enabled future.