Key Points
- Britain experienced approximately eight million deepfake incidents in 2025, a nearly fourfold increase on 2023 levels
- Fraud in the gambling industry rose 73% between 2022 and 2024, with synthetic media used to bypass identity verification checks
- A 2025 analysis concluded British law enforcement lacks adequate resources to combat AI-driven fraud
- Internal Meta figures revealed that approximately $16 billion in 2024 advertising revenue came from fraudulent schemes and prohibited products
- While deepfake regulations under the Online Safety Act are being developed, enforcement powers for scam advertising won’t arrive before 2027
Britain is confronting an unprecedented wave of synthetic media fraud powered by artificial intelligence, with regulatory mechanisms lagging far behind the threat’s evolution. Evidence continues to mount that deepfake scams have reached industrial scale, with the online gambling sector bearing particularly severe consequences.
Approximately eight million deepfake instances circulated across the UK during the past year, a nearly fourfold increase on figures documented in 2023, according to data from the Home Office’s Accelerated Capability Environment.
Research compiled in 2026 by the AI Incident Database characterized this fraud category as having reached “industrial” proportions. Fred Heiding, who studies AI-enabled scams at Harvard University, issued a stark warning: “the worst is yet to come.”
The online betting and gaming sector has been hit particularly hard. According to Gambling IQ, an industry intelligence provider, fraud in the sector rose 73% between 2022 and 2024.
Criminals are deploying deepfakes to circumvent Know Your Customer (KYC) verification and to run large-scale bonus abuse schemes across gambling platforms. Advanced voice synthesis and manipulated video allow fraudsters to create highly convincing impersonations.
Enforcement Capabilities Lag Behind Threat
An analysis published in 2025 by the Alan Turing Institute, authored by Joe Burton, Professor of Security and Protection Science at Lancaster University, determined that British law enforcement operates with “inadequate equipment to address AI-fuelled fraud.”
Burton was blunt. “AI-enabled crime is already inflicting substantial personal and societal damage alongside significant financial losses,” he said.
He advocated equipping law enforcement with enhanced capabilities to dismantle criminal networks, warning that without such intervention, criminal exploitation of AI technologies would accelerate dramatically.
The UK Gambling Commission assigns primary fraud prevention responsibility to licensed operators. These operators must develop and implement their own fraud detection policies and control mechanisms.
Yet as AI capabilities evolve at breakneck speed, platforms acting in isolation cannot adequately address the challenge. Many AI-facilitated scams targeting the gambling ecosystem originate entirely outside regulated platforms.
Social media networks serve as the primary distribution channels for these fraudulent schemes, and platform recommendation algorithms can inadvertently amplify deceptive content through design choices that prioritize user engagement over content verification.
Reuters reported in November 2025 that Meta’s internal accounting indicated approximately 10% of its 2024 revenue, roughly $16 billion, originated from advertisements connected to fraudulent operations and prohibited merchandise.
Just last week, Reuters reported that Meta failed more than 1,000 times in a single week to remove scam content from its UK platforms, including illegal online casinos deploying deepfake technology to lure potential victims.
Regulatory Response Remains Slow
Ofcom has begun drafting regulatory frameworks to govern deepfakes under both the Online Safety Act 2023 and the Data (Use and Access) Act 2025. However, the regulator’s published guidance exposes significant limitations in existing oversight structures.
Certain AI chatbots fall entirely outside regulatory jurisdiction because they operate as self-contained systems that qualify neither as search services nor as platforms facilitating user-to-user communication.
Though enforcement of the Online Safety Act began in March 2025, powers to address paid-for fraudulent advertising have been postponed until 2027 at the earliest. This leaves enforcement reliant on voluntary cooperation from companies such as Meta.
Neither the Financial Conduct Authority nor Ofcom currently has direct authority to intervene against these advertisements. Content generated without external data sources, including synthetic images and fabricated videos, frequently escapes oversight unless it meets narrow definitional criteria.
The consequences of deepfake scam proliferation continue to fall primarily on platforms and individual users, even though the technological systems enabling these risks lie beyond their meaningful control.