US agencies report on deepfakes


The recommendations set out by CISA are a promising step towards strengthening organisations’ resilience against the malicious use of AI. We have already seen how difficult deepfakes can be to spot, and people are increasingly falling for deepfake scams, so the need to address the problem is more urgent than ever. Deepfakes can affect every aspect of society – from the integrity of elections and trust in politicians to financial fraud and unauthorised access. Faced with this threat, organisations need to harness technology as their main weapon against adversaries who use deepfakes.

Several companies are currently developing cutting-edge deepfake detection tools. Biometric tools, for example, use AI-trained algorithms to assess the authenticity and liveness of the faces and voices presented for access and authentication. This approach can significantly strengthen organisations’ verification methods and help safeguard assets from theft.
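To make the idea concrete, the sketch below shows how a liveness/deepfake score might gate an authentication flow. It is a minimal illustration only: the scoring function, threshold, and names used here are hypothetical placeholders, not any particular vendor’s API or a specific detection algorithm.

```python
# Minimal sketch: gate authentication on a liveness/deepfake score.
# score_liveness(), the threshold, and the data types are hypothetical
# placeholders standing in for a vendor or in-house detector.

from dataclasses import dataclass


@dataclass
class LivenessResult:
    score: float      # 0.0 = almost certainly synthetic, 1.0 = almost certainly live
    threshold: float  # decision threshold, ideally set during third-party evaluation

    @property
    def is_live(self) -> bool:
        return self.score >= self.threshold


def score_liveness(face_frames: list[bytes], voice_sample: bytes) -> LivenessResult:
    """Placeholder for an AI-based detector that scores the captured
    face frames and voice sample for liveness/authenticity."""
    # A real detector would run a trained model here; we return a dummy score.
    score = 0.97 if face_frames and voice_sample else 0.0
    return LivenessResult(score=score, threshold=0.9)


def authenticate(user_id: str, face_frames: list[bytes], voice_sample: bytes) -> bool:
    """Only continue to the usual credential/MFA checks if the presented
    biometrics pass the liveness/deepfake check."""
    result = score_liveness(face_frames, voice_sample)
    if not result.is_live:
        print(f"Access denied for {user_id}: possible deepfake (score={result.score:.2f})")
        return False
    print(f"Liveness check passed for {user_id}; continuing authentication")
    return True


if __name__ == "__main__":
    authenticate("alice", face_frames=[b"frame-1", b"frame-2"], voice_sample=b"audio")
```

The point of the sketch is the placement of the check: the liveness decision happens before, and independently of, the normal credential flow, so a convincing deepfake alone is not enough to gain access.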

However, there is still a performance gap across deepfake detection algorithms. Organisations looking to implement such solutions should ensure the tools have been properly assessed and certified by independent third-party evaluators.