In my career across both the public and private sectors, this is the most exciting time to be working with AI. The emergence of AI built on large language models (LLMs) has fundamentally changed the way we interact with our data and each other. That said, 2025 will undoubtedly be a year of tempered optimism in the AI space. We’re beginning to see the cracks in the ‘magic’ promised by LLMs: they are genuinely good at some tasks and can dazzle. However, these models cannot think, reason, or adapt to evolving threats in the ways many anticipated, and they hallucinate at higher rates than users may expect. This gap could very well lead to an ‘AI winter,’ in which overblown expectations cool investor and developer enthusiasm.
It is crucial to remember that AI is not one monolithic entity but a set of tools applied in varied, case-by-case scenarios. Moving forward, investment and effort should go toward understanding and solving particular use cases — not toward building overarching, monolithic solutions to every problem (or AGI). Successful cybersecurity efforts in 2025 will hinge on a pragmatic, integrated approach that applies tailored AI models to specific problems rather than expecting a one-size-fits-all solution.