
AI Deepfakes: The Rise, Risks, and Regulation in 2025

Tags: AI, deepfakes, security, policy

[Image: someone viewing deepfake content]

Introduction

AI deepfakes have entered a new and alarming phase in 2025. From political manipulation to corporate scams and image-based abuse in schools, synthetic media is now at the heart of social, financial, and ethical crises. What once seemed like science fiction has become a geopolitical and domestic security issue. This post explores recent incidents and the growing global effort to detect, prevent, and regulate AI-generated deception.

Deepfakes in Politics and Media

Political Manipulation Heats Up

The use of AI-generated videos in politics crossed a new threshold this week when the National Republican Senatorial Committee released a deepfake attack ad of Senator Chuck Schumer, sparking bipartisan fears about the authenticity of digital campaigning. Experts warn that such uses could flood social feeds in the 2026 midterm cycle, normalizing synthetic misinformation that blurs the line between parody and propaganda.

Cultural Controversies and Content Moderation

Social backlash also followed the OpenAI Sora 2 incident, where users generated deepfakes of Martin Luther King Jr., prompting OpenAI to pause the feature. Civil rights advocates called the depictions “disrespectful,” reigniting debates about generative model moderation and ethical boundaries for historical likenesses.

Deepfakes in Crime and Exploitation

Deepfake Voice and Video Scams Escalate

Law enforcement and cybersecurity agencies are reporting steep increases in voice cloning and video deepfake fraud. According to Resemble AI data, global losses to deepfake scams exceeded $547 million in the first half of 2025, doubling since the previous year. Scammers now impersonate CEOs and colleagues on Zoom calls or phone lines, tricking employees into transferring company funds. The FBI and American Bankers Association have issued new public safety infographics warning consumers about the explosion of AI-driven impersonation scams.

Image-Based Abuse Targets Students

Australia’s eSafety Commissioner reported a wave of deepfake image-based abuse incidents involving digitally altered explicit images of high school students. These cases highlight a global pattern of AI misuse across schools and social media — where accessible image generation tools amplify harassment and reputational harm faster than current moderation systems can respond.

Public Opinion and Legislative Response

Widespread Support for Protection

A Boston University survey found 84% of Americans across party lines support legal protections against unauthorized use of one’s voice or likeness in deepfakes, reflecting rare bipartisan unity around AI accountability. Respondents overwhelmingly endorsed watermarking requirements and the right to license personal likenesses for AI training.

Regulatory Momentum Builds

In the U.S., 301 deepfake-related bills have been introduced across state legislatures in 2025, with 68 enacted so far—primarily targeting sexual and impersonation-related offenses. States like Pennsylvania, Utah, Arkansas, and Montana have passed “digital likeness” laws granting individuals explicit control over their image rights. Meanwhile, the White House’s America’s AI Action Plan calls for a unified national strategy to combat AI-generated misinformation and support watermark detection research.

Ethical and Security Implications

Crisis of Authenticity

UNESCO warns of a mounting “crisis of knowing” as synthetic content erodes the very notion of truth in digital media ecosystems. When AI-generated videos, audio, and texts replicate human identity indistinguishably, citizens, journalists, and even courts face unprecedented evidentiary challenges. This authentication gap could destabilize democratic processes and interpersonal trust.

Corporate and Infrastructure Risk

Deepfake fraud now extends beyond personal scams. Multinational corporations have reported multimillion-dollar unauthorized transfers due to synthetic executive impersonations on video calls. Financial institutions are deploying AI-based detection systems that analyze facial micro-movements, lip sync drift, and acoustic anomalies — but false negatives remain a problem as generative realism improves.
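Detection systems of this kind work by extracting statistical features from the media stream and flagging frames that fall outside the range typical of natural speech or faces. As a purely illustrative sketch (not any vendor's actual pipeline), one classic acoustic feature is spectral flatness: noise-like or heavily vocoded audio scores higher than natural voiced speech. The threshold below is a hypothetical value chosen for the demo, not a calibrated one.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum.
    Values near 1.0 indicate noise-like audio; voiced speech and
    pure tones concentrate energy in few bins and score near 0."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12  # floor avoids log(0)
    geometric = np.exp(np.mean(np.log(power)))
    arithmetic = np.mean(power)
    return float(geometric / arithmetic)

def flag_suspicious_frames(signal: np.ndarray,
                           frame_size: int = 1024,
                           threshold: float = 0.3) -> list[bool]:
    """Flag each frame whose flatness exceeds a (hypothetical) threshold."""
    n_frames = len(signal) // frame_size
    return [
        spectral_flatness(signal[i * frame_size:(i + 1) * frame_size]) > threshold
        for i in range(n_frames)
    ]
```

A real detector would combine many such features (lip-sync timing, facial micro-movement statistics, phase artifacts) inside a trained classifier; single-feature thresholds like this are easy for modern generators to evade.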

The Path Forward

The global AI transparency movement is accelerating. Governments, banks, and tech platforms are racing to standardize digital watermarks, detect synthetic media, and criminalize malicious deepfake creation. At the same time, industry coalitions like DeepMedia and organizations such as the FBI and UNESCO are urging “media provenance by default” — embedding verifiable source data into all AI-generated files.
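One way to picture “media provenance by default” is a signed manifest attached to every generated file, so any downstream consumer can verify where the bytes came from and that they were not altered. The sketch below uses only Python's standard library; the key, field names, and generator label are all hypothetical, and real provenance standards such as C2PA use certificate chains rather than a shared secret.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provenance-demo-key"  # hypothetical signing key for the demo

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a signed provenance manifest for a generated media blob."""
    manifest = {
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check both the manifest signature and the content hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())
```

The design point this illustrates: verification must fail if either the manifest or the media bytes change, which is what lets platforms treat unsigned or tampered files as untrusted by default.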

Deepfakes may be the most pressing test of public trust in the AI era. Whether societies respond with technical innovation or suffer narrative disintegration will depend on how swiftly we can align regulation, ethics, and public literacy.


References and Further Reading

  1. NPR — A GOP attack ad deepfakes Chuck Schumer with AI (Oct 2025) https://www.npr.org/2025/10/17/nx-s1-5578279/ai-schumer-gop-attack-ad
  2. Fortune — OpenAI pauses AI-generated deepfakes of Martin Luther King Jr. on Sora 2 (Oct 2025) https://fortune.com/2025/10/17/openai-sora-martin-luther-king-deepfakes-foolishness-direspectful
  3. ABC News — Deepfake image-based abuse doubles across Australia (Oct 2025) https://www.abc.net.au/news/2025-10-17/deepfake-image-based-abuse-doubles-across-australia
  4. ABC News — Police investigate explicit deepfakes made of Sydney schoolgirls (Oct 2025) https://www.abc.net.au/news/2025-10-16/police-investigate-sydney-school-image-deepfake
  5. Al Jazeera — AI now sounds more like us — should we be concerned? (Oct 2025) https://www.aljazeera.com/news/2025/10/6/ai-now-sounds-more-like-us-should-we-be-concerned
  6. UNESCO — Deepfakes and the crisis of knowing (Sept 2025) https://www.unesco.org/en/articles/deepfakes-and-crisis-knowing
  7. Boston University — Big margins support protections against AI-powered deepfakes on social media (Aug 2025) https://www.bu.edu/com/articles/big-margins-support-protections-against-ai-powered-deepfakes-on-social-media-survey-finds
  8. RILA — AI Legislation Across the U.S.: A 2025 End of Session Recap (Aug 2025) https://www.rila.org/blog/2025/09/ai-legislation-across-the-states-a-2025-end-of-ses
  9. ABA Foundation and FBI — Joint Infographic on Deepfake Scams (Sept 2025) https://www.aba.com/about-us/press-room/press-releases/aba-foundation-and-fbi-joint-infographic-on-deepfake-scams
  10. CNN — Deepfake scams can cheat companies out of millions (Oct 2025) https://www.cnn.com/2025/10/07/business/deepfake-scam-ai-zoom-call-digvid