GPT-4 is out, but it might not be the most interesting AI development of the week


OpenAI has stated that GPT-4 will be better than GPT-3 at producing realistic, targeted content. Since prompts designed to create content for the purposes of misleading or influencing people wouldn't typically trigger the model's refusal policy or API safety filters, GPT-4 is a more capable tool than previous models for augmenting tasks such as social engineering and the creation and propagation of disinformation.

WithSecure Researcher Andy Patel says, “Given its longer context length, GPT-4 will be better at creating coherent long-form articles. GPT-4 will also give rise to highly convincing chatbots. Its ability to take both text and images as input will enable it to operate in a vastly more convincing and persuasive manner in social media contexts. That said, older models such as GPT-3 are already good enough to be used for common social engineering, disinformation, and harassment purposes. Recently released lightweight LLMs, such as Alpaca, that can be run on cheap hardware and that rival GPT-3’s capabilities are of much more interest when considering the short-term future of AI-augmented crime and influence operations.”