
A recent report by the Tech Transparency Project (TTP) has revealed a dramatic rise in sophisticated deepfake political scam ads circulating on Meta’s platforms, particularly Facebook and Instagram. Watchdog groups are sounding the alarm over the company’s enforcement practices, describing them as reactive rather than preventive. This approach, they argue, enables fraudulent campaigns to thrive while continuing to exploit vulnerable users, especially senior citizens.
Watchdog Findings: Meta Deepfake Ads
The TTP identified 63 scam advertisers that collectively spent an estimated $49 million on Meta’s ad systems.
These groups deployed artificial intelligence to craft deepfake videos of prominent political figures such as Donald Trump and high-profile personalities like Elon Musk. The ads carried false promises ranging from stimulus payments to government aid, luring unsuspecting users into fraudulent traps.
What Tactics Are Scammers Using?
Impersonating public figures has become a disturbing new tactic among scammers. Investigators uncovered troubling examples, including a manipulated Trump video that appeared to promise citizens a $5,000 government check. The video directed viewers to malicious websites designed to steal personal information and facilitate fraudulent payments.
Targeting was precise as well: the TTP discovered that many of these ads were aimed directly at older adults across more than 20 U.S. states. To heighten effectiveness, messaging was even tailored by region and age group, exploiting psychological vulnerabilities to maximize impact.
Meta’s Efforts Against Deepfakes
Meta’s enforcement record, however, highlights serious shortcomings. While the company confirmed it had removed deceptive accounts for policy violations, the TTP documented systemic loopholes: nearly half of the perpetrators resumed advertising simply by creating new or modified profiles. Removal, rather than prevention, remains the dominant strategy.
Although 35 ad accounts were eventually disabled, many had already run dozens, if not hundreds, of fraudulent campaigns before takedown. Alarmingly, six accounts spent more than $1 million each before any action was taken. Even Meta’s political ad verification process (requiring ID and proof of address) proved ineffective, as scammers repeatedly bypassed safeguards.
The risks extend far beyond immediate scams. Deepfake ads erode public trust by blurring the line between truth and misinformation, corrode discourse, and push victims into financial loss. Regulators, meanwhile, are intensifying scrutiny, urging Meta to accept greater responsibility for AI misuse.
In its defense, Meta insists it prohibits deceptive ads and is bolstering detection algorithms, account vetting, and enforcement tools. Yet critics remain skeptical, urging a proactive approach that limits deepfake tools in ads, enforces transparency, and improves real-time detection.
Without proper measures, deepfake scams will continue to flourish and vulnerable communities will remain at risk.