ScamWatch

If you feel you're being scammed in the United States, contact the Federal Trade Commission (FTC) at 1-877-382-4357 or report online at reportfraud.ftc.gov.

Legal & Reporting Routes for AI-Impersonation Victims in the U.S. (2025)


Introduction — Why this matters now

AI-generated impostor media (deepfakes, voice clones, and synthetic videos) are now routinely used to harass, extort, and scam people — from fake job interviews to impersonation calls and non-consensual intimate images. That surge has produced a patchwork of state laws and several important federal developments in 2024–2025. Victims therefore need both immediate practical steps and a basic legal roadmap to protect themselves and to preserve options for enforcement or civil claims.

This article summarizes what works in the U.S. in 2025: where to report, how to document evidence, which federal and state authorities can help, and the typical civil-law claims lawyers use to force takedowns or collect damages.

Immediate actions: preserve evidence and stop the spread

If you discover an AI-generated or impersonating item that harms you (fake audio, a deepfake video, or a cloned identity), act quickly. Preserve copies, URLs and metadata before anything is removed:

  • Take screenshots and screen recordings (include timestamps).
  • Save the original link(s), page source, and any messages or emails used to share the content.
  • Download the file if possible and note account names, profile URLs, and the platform’s content ID (if shown).
  • Preserve your account activity logs, device logs, and email headers if an account was breached or used.
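The preservation steps above can be partially automated. Below is a minimal sketch (the folder layout and names are assumptions, not part of any official procedure) that records a SHA-256 fingerprint and UTC timestamp for each saved evidence file, so you can later show a platform, investigator, or court that your copies were not altered after capture:

```python
# Evidence-log sketch: hash each saved file and record when it was logged.
# The "folder" of saved files is an illustrative assumption; adapt to your records.
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_evidence_log(folder: str) -> list[dict]:
    """One entry per saved file: name, SHA-256 fingerprint, UTC log time."""
    entries = []
    for path in sorted(Path(folder).iterdir()):
        if path.is_file():
            entries.append({
                "file": path.name,
                "sha256": hash_file(path),
                "logged_at_utc": datetime.now(timezone.utc).isoformat(),
            })
    return entries
```

Keep the resulting log (for example, exported as JSON) alongside your screenshots and receipts; re-computing matching hashes later demonstrates that the copies were not modified after you logged them.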

Do not negotiate with extortionists, and do not publicly repost the content — reposting may spread it further and complicate legal steps.

Where to report first: platforms, specialized services, and law enforcement

Reporting usually needs to go to multiple places: the hosting platform, specialized NCII services (for intimate images), and law enforcement. Start with the in-app reporting tools on the platform where the content appears — major social networks have dedicated impersonation and NCII forms — and ask for a takedown receipt or case number.

For non-consensual intimate images (including AI‑generated sexual images), use StopNCII (a hash‑matching takedown service) and, if a minor is involved, the National Center for Missing & Exploited Children (NCMEC) CyberTipline. These services help coordinate removal and prevention across participating platforms.
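The key property of a hash-matching service like StopNCII is that the image never leaves your device: only a digital fingerprint (hash) is submitted, and participating platforms compare that fingerprint against new uploads. The sketch below illustrates the flow only; it uses an exact-match SHA-256 for simplicity, whereas real services use perceptual hashes designed to survive minor edits, and the in-memory "blocklist" stands in for the shared registry:

```python
# Simplified hash-matching flow: only fingerprints are shared, never images.
# SHA-256 exact matching is illustrative; production systems use perceptual hashes.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint locally; the image itself stays private."""
    return hashlib.sha256(image_bytes).hexdigest()

# Shared registry that participating platforms check (illustrative stand-in).
blocklist: set[str] = set()

# Victim side: hash the image on-device and submit only the hash.
private_image = b"\x89PNG illustrative placeholder bytes"
blocklist.add(fingerprint(private_image))

# Platform side: hash each new upload and refuse anything that matches.
def should_block(upload_bytes: bytes) -> bool:
    return fingerprint(upload_bytes) in blocklist
```

This design is why such services can coordinate removal across platforms without ever holding or transmitting the sensitive image itself.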

For fraud, impersonation or extortion that involves financial loss or criminal threats, file a complaint with the FBI’s Internet Crime Complaint Center (IC3). IC3 is the standard federal entry point for online fraud and helps route information to local and federal investigators. Filing also helps authorities detect trends and may be required by banks or insurers.

To report consumer‑protection or platform‑compliance issues (including platform failures to remove NCII under new federal rules), contact the Federal Trade Commission (FTC). The FTC has focused on AI‑enabled impersonation and has proposed rules and enforcement aimed at preventing impersonation scams and unfair practices by AI vendors.

Legal remedies available in 2025 — federal and state paths

Federal and state law options are both important. In 2025 Congress passed a federal law commonly known as the TAKE IT DOWN Act, which criminalizes the non-consensual publication of intimate images (including AI-generated images) and requires covered platforms to remove flagged content, generally within 48 hours of a valid request. The FTC is empowered to enforce platform compliance under that statute, giving victims of non-consensual intimate imagery a new federal enforcement path.

At the same time, states continue to expand criminal and civil tools. Some states (for example, Washington) have broadened criminal liability for malicious deepfakes and forged digital likenesses; other states (including New Jersey and Tennessee) have passed statutes targeting unauthorized voice or likeness cloning and non-consensual explicit material. That creates variation in penalties and civil remedies depending on where the harm occurred or where the speaker/host is located. If the offender is in a state with a strong deepfake statute you may have additional criminal and civil leverage.

Civil claims commonly used by victims include:

  • Invasion of privacy / public disclosure of private facts (state torts).
  • Defamation (if the deepfake asserts false, reputationally damaging facts).
  • Right of publicity / unauthorized commercial use of likeness (where states recognize that claim).
  • Intentional infliction of emotional distress (for severe harassment or extortion).
  • Copyright claims or DMCA takedowns (if the attacker used your copyrighted photo without permission).

Because platforms and creators may be anonymous, civil litigation can be used to obtain subpoenas and preservation letters to compel platforms, hosts, registrars and payment processors to reveal account data and remove content. Talk to a lawyer about emergency relief (ex parte preservation orders or temporary restraining orders) if quick action is needed to stop further dissemination.

Practical checklist for victims and next steps

  1. Document everything: screenshots, URLs, timestamps, messages, receipts, and payment records.
  2. Report to the platform where the content appears and save the platform’s response or case number.
  3. If the content is sexual and non‑consensual, submit a StopNCII case and, if a minor is involved, file a CyberTip with NCMEC.
  4. If the incident involves impersonation leading to fraud or extortion, file with IC3 and notify your bank or payment provider immediately.
  5. Report to the FTC if you suspect a platform failed to remove NCII or if an AI vendor is facilitating impersonation or scams. The FTC has been active on AI deception and is building enforcement tools targeting impersonation.
  6. Consider urgent legal help to issue preservation subpoenas and takedown notices; if you can’t afford a lawyer, contact victim‑support organizations such as the Cyber Civil Rights Initiative hotline, which offers referrals and help to NCII victims.

Keep a private, secure log of all outreach (platforms, law enforcement, attorneys) and avoid interacting with the attacker. If there is an immediate threat to your safety, contact local law enforcement.

Limitations, enforcement reality and what to expect

Even with newer federal and state laws, enforcement can be slow or practically limited. Platforms may remove content but fail to block re‑uploads immediately; cross‑border takedown is difficult if a host is outside U.S. jurisdiction; and civil lawsuits are costly and can draw more publicity. Still, a combined approach — platform reporting, StopNCII/NCMEC hashing, IC3/FTC complaints, and targeted civil or criminal filings when appropriate — gives victims the best chance to halt spread and hold perpetrators accountable.

Keep in mind: legal changes in 2024–2025 (federal NCII/criminalization measures and state statutes) improved the tools available to victims, but legal strategy depends on the facts: whether a minor is involved, whether money changed hands, and where the actor or host is located.

Helpful resources & contacts

  • StopNCII — automated hash-based takedown for non-consensual intimate images.
  • NCMEC CyberTipline — for reports involving minors.
  • IC3 (FBI Internet Crime Complaint Center) — to report scams, impersonation and extortion.
  • FTC — report deceptive practices or platform compliance problems with AI impersonation.
  • Cyber Civil Rights Initiative (CCRI) — victim support and referrals for NCII victims (hotline and legal resources).

Useful follow-ups to this guide include: (1) a printable evidence checklist to use when reporting; (2) a tailored takedown email template for platforms and hosting providers; and (3) a list of state-specific statutes and enforcement offices for the state(s) involved in your case.