AI Predictions For 2024


AI Regulations

The recent OpenAI shakeup has all but proven that concerns about AI safety go out the window as soon as money and greed are at stake. Although conversations about AI safety should shift from long-termism and fictional existential doom to near-term threats and the potential for human misuse of AI, they likely won’t.

The effective altruists and safety-ists will likely continue to dominate the discussion – their pro-regulation stance benefits the corporations and those who stand to gain financially from all the doom-mongering. For some reason, it’s easier for people to imagine a future that looks like Terminator than one that looks like Star Trek.

Open-source AI

Open-source AI will continue to improve and see widespread adoption. These models herald a democratisation of AI, shifting power away from a few closed companies and into the hands of humankind. A great deal of research and innovation will happen in that space in 2024. And whilst I don’t expect adherents in either camp of the safety debate to switch sides, the number of high-profile open-source proponents will likely grow.

Disinformation

AI will be used to create disinformation and influence operations in the run-up to the high-profile elections of 2024. This will include synthetic written, spoken, and potentially even image or video content. Disinformation is going to be incredibly effective now that social networks have scaled back or completely removed their moderation and verification efforts. Social media will become even more of a cesspool of AI- and human-created garbage.

The cybercriminal ecosystem has become compartmentalised into a multitude of service offerings such as access brokers, malware writers, spam campaign services, and so on. On the disinformation front, there are many companies that pose as PR or marketing outfits but provide disinformation and influence operations as services. Where relevant, cybercriminals and other bad actors will turn to AI for the sake of efficiency. They will use generative models to create phishing content, social media content, deepfakes, and synthetic images and video. The creation of such content requires expertise in prompt engineering – knowing which inputs generate the most convincing outputs. Will prompt engineering become a service offering? Perhaps.

AI in business

AI-powered services and products will be rushed to market as competition amongst startups and established corporations continues to heat up. Not having AI functionality in a product will mean the difference between it being viable and it being useless. And that means little to no attention paid to security, just as we saw with the first IoT devices. “If it’s smart, it’s vulnerable” is about to take on a whole new meaning.