An imposter reportedly used artificial intelligence (AI) to replicate the voice of U.S. Secretary of State Marco Rubio in an effort to deceive at least five senior government officials.
The impersonation involved contacting three foreign ministers, a U.S. governor, and a member of Congress via Signal, a messaging app favored by diplomats and officials. “The sender used the display name marco(dot)rubio(at)state(dot)gov, which is not a functional email address, as their Signal nickname,” according to a State Department cable describing the incident.
Authorities have yet to determine who is behind the scheme, but the cable suggests the impersonator acted “with the goal of gaining access to information or accounts.” The State Department has acknowledged the incident and said it is strengthening its cybersecurity posture, though it declined to comment further on the ongoing investigation.
“This fraud highlights how deepfake technology has matured beyond easily detectable viral videos into seamless audio and text deception,” said Leah Siskind, an AI research fellow at the Foundation for Defense of Democracies, in a statement shared with Security Management. “We’ve seen other instances of deepfakes of senior government officials used to gain access to personal accounts, but leveraging AI to influence diplomatic relationships and decision-making is a dangerous escalation. This is an urgent national security issue with serious diplomatic ramifications.”
Rubio’s use of Signal has been documented previously, including an incident in which The Atlantic’s editor-in-chief, Jeffrey Goldberg, was mistakenly added to a chat discussing sensitive U.S. foreign policy matters.
This episode adds to a growing list of deepfake-related impersonations targeting U.S. officials. In May 2025, a similar case involved someone posing as White House Chief of Staff Susie Wiles. A year earlier, a deepfake video of State Department spokesperson Matthew Miller made the rounds online.
According to The Washington Post, Hany Farid, a professor at the University of California at Berkeley who specializes in digital forensics, said operations of this nature do not require sophisticated actors but often succeed because government officials can be careless about data security.
Siskind added, “With just 30 seconds of recorded audio, an adversary can create a convincing audio clip impersonating someone else. Given how frequently our senior officials speak in public, this means anyone could be at risk of similar scams.”
Point of View (POV):
This case marks a serious escalation in the use of AI for geopolitical subterfuge. Deepfakes are no longer just a viral novelty—they’re now a tangible national security threat capable of targeting decision-makers with credible imitations. The incident highlights an urgent need for stricter identity verification protocols and deeper awareness among public officials, who increasingly face AI-powered social engineering threats. As generative AI tools become more accessible, even low-level adversaries can orchestrate high-impact deception.