With Gov. Newsom’s veto of California’s SB 1047, enterprise leaders should seize the opportunity to proactively address AI risks and protect their AI initiatives now. Rather than wait for regulation to dictate safety measures, organizations should enact robust AI governance practices across the entire AI lifecycle: establishing controls over access to data, infrastructure, and models; rigorously testing and validating models; and ensuring output auditability and reproducibility. By embedding these governance practices into AI workflows from the start, companies can protect against misuse, mitigate risks, and demonstrate accountability, putting them ahead of potential future regulations (and competitors). Together, these actions prepare enterprises for inevitable compliance, build trust with stakeholders, and foster a culture of responsible AI adoption that drives business impact.
Enterprises should also actively advocate for a single federal regulatory approach that addresses real-world AI threats without stifling innovation. SB 1047’s failure highlights the dangers of fragmented, state-level regulation, which often fails to address current risks while creating a costly compliance landscape. Companies should use this regulatory pause to engage with policymakers and promote a unified, adaptable federal framework that evolves alongside AI technology. By supporting regulations that focus on actual threats — such as fraud, misinformation, and misuse by bad actors — AI leaders can help shape a regulatory environment that balances innovation with safety. In doing so, they can avoid the pitfalls of reactive compliance and instead contribute to standards that protect society while allowing the U.S. to maintain its AI leadership.