Summary Bullets:
• The recent rash of synthetic AI-driven deepfakes has business and government entities alarmed about the potential security implications.
• The US National Security Agency (NSA) has issued guidance on how organizations can detect images designed for malevolent purposes and what steps they can take to minimize damage from deepfakes.
Advances in generative artificial intelligence (AI) are offering promising use cases. But the technology can also be applied for more nefarious purposes. Cyber criminals can harness synthetic AI to spread misinformation, cause harm to organizations, and potentially to profit from their attacks. In one of the most recent high-profile cases, the actor Tom Hanks criticized a dental plan advertisement that used an AI-generated image of him without his consent. In a separate case, the “CBS Mornings” host Gayle King said a deepfake video using her likeness and voice to hawk a weight loss product was created and distributed without her permission. The synthetic AI video was built using a sanctioned post publicizing King’s radio show.
Enterprise executives are concerned about the potential risks of malicious synthetic content, which can be used to impersonate executives, damage corporate reputations, and extort money. In August 2023, cyber attackers successfully used synthetic AI to replicate an employee’s voice and breach the software development company Retool. The hackers initiated the attack with SMS text messages that spoofed an IT staffer’s mobile number. Claiming there was a payroll issue, the cyber criminals lured one employee into clicking a link in a phishing message.
The URL took the staffer to a fraudulent portal featuring a multi-factor authentication form; to get past it, the attackers used synthetic AI to emulate an actual employee’s voice.
The attackers took over 27 Retool customer accounts, all belonging to cryptocurrency companies, altering user email addresses and resetting passwords. While Retool uncovered and mitigated the breach quickly, at least one client lost $15 million in cryptocurrency.
VMware’s 2022 Global Incident Response Threat Report found that two-thirds of the cyber security professionals surveyed said deepfakes had been used as part of an attack against their business, a 13% jump from the prior year. Email was the most frequent delivery method for the harmful content. With the 2024 US presidential election looming, an even greater proliferation of deepfakes spreading misinformation and disinformation is expected.
In September 2023, the NSA, the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) published guidance on how to identify and respond effectively to malicious synthetic threats. The document, Contextualizing Deepfake Threats to Organizations, outlines common deepfake patterns and techniques. The agencies counsel organizations to treat the 18-page document’s recommendations on how to prepare for, identify, defend against, and respond to deepfakes as action items.
The document details steps enterprises should take to both detect and respond to harmful synthetic AI. It also outlines public/private collaborations aimed at arming organizations against deepfakes, including the DARPA Semantic Forensics (SemaFor) program, which is building advanced semantic capabilities for media forensics and authentication. Contributors include Nvidia, PAR Government Systems, SRI International, and a number of research institutions.

