Executive order on AI safeguards


The recently signed Executive Order establishing new standards for AI use is one of the most comprehensive approaches to governing this emerging technology we have seen to date.

As much as we may want to encourage organic and unconstrained innovation, it is imperative that some guardrails be established. Developers must be mindful of the downstream effects of their work, and regulators must be in place to monitor for potential harms so they can be addressed before spiralling out of control.

As we have seen with recent AI developments, the technology is moving at a rapid pace and has already made an impact on society, with diverse applications across industries and regions. Whilst there have been calls for regulations to govern the development of AI technologies, most have been focused on preserving end user privacy and maintaining the accuracy and reliability of information coming from AI-based systems.

President Biden’s executive action is broad-based and takes a long-term perspective, with considerations for security and privacy, as well as equity and civil rights, consumer protections, and labour market monitoring. The intention is valid – ensuring AI is developed and used responsibly. But the execution must be balanced to avoid regulatory paralysis. Efficient and nimble regulatory processes will be needed to truly realise the benefits of comprehensive AI governance.

I am optimistic that this holistic approach to governing the use of AI will lead not only to safer and more secure systems, but will also favour those that have a more positive and sustainable impact on society as a whole.