ScamWatch

If you feel you're being scammed in the United States: Contact the Federal Trade Commission (FTC) at 1-877-382-4357 or report online at reportfraud.ftc.gov

Employer’s Playbook: Detecting Deepfake Candidates, Synthetic Resumes and Bot‑Run Interviews


Introduction: Why hiring teams must treat candidate authenticity as a security problem

The rise of generative AI means hiring teams now face not only embellished resumes but also fully fabricated candidate identities, deepfake video interviews, and bot‑run screening calls. These attacks range from lone fraudsters using stolen PII to organized rings that manufacture synthetic applicants to exploit onboarding workflows and payroll systems. Employers that assume traditional vetting is sufficient risk payroll fraud, stolen intellectual property, and legal exposure.

Federal and industry groups have flagged this as an increasing threat: regulators are proposing new rules to curb AI‑enabled impersonation, and government agencies (including the FBI's IC3) have warned about deepfakes used to apply for remote work.

Red flags and resume‑screening checklist

Start by treating candidate screening as layered risk control. The following indicators should trigger closer verification or escalation:

  • Inconsistent timelines: employment dates, overlapping roles, or sudden gaps with high‑responsibility claims.
  • Over‑optimized keyword resumes: many buzzwords or verbatim job descriptions without concrete deliverables.
  • Stock or mismatched photos: profile photos that reverse‑image search to other identities or stock images.
  • Unusual availability: insistence on asynchronous interviews (pre‑recorded) or refusal to join a live video call.
  • Rapid gig onboarding: applicants who accept offers and immediately request access or payouts.

Practical tip: augment manual screening with targeted tools and platforms that flag anomalies (IP location mismatches, reused email domains, or known malicious indicators). Industry HR guidance and trade groups recommend combining technical checks with human verification during interviews.
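
As a concrete starting point, here is a minimal Python sketch of such anomaly flags. The Applicant fields, the disposable‑domain list, and the screening_flags helper are all illustrative assumptions, not a vetted threat feed or a real ATS schema:

```python
from dataclasses import dataclass

# Illustrative list only; in production, source this from a maintained feed.
DISPOSABLE_DOMAINS = {"mailinator.com", "10minutemail.com", "guerrillamail.com"}

@dataclass
class Applicant:
    email: str
    claimed_country: str  # e.g. "US", taken from the application form
    ip_country: str       # resolved from the session IP by your geolocation service

def screening_flags(applicant: Applicant, reused_domains: set[str]) -> list[str]:
    """Return human-readable anomaly flags; any hit should route to manual review."""
    flags = []
    domain = applicant.email.rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE_DOMAINS:
        flags.append(f"disposable email domain: {domain}")
    if domain in reused_domains:
        flags.append(f"email domain reused across other applications: {domain}")
    if applicant.claimed_country != applicant.ip_country:
        flags.append(f"IP geolocation {applicant.ip_country} != claimed {applicant.claimed_country}")
    return flags

# Flags the disposable domain and the country mismatch:
print(screening_flags(Applicant("jane@mailinator.com", "US", "NG"), {"corpmail.biz"}))
```

None of these flags is disqualifying on its own; the point is to route flagged applications to a human before any assessment or offer proceeds.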

Technical detection steps, live‑verification best practices and tools

Use layered technical controls that are feasible for your team — detection is rarely a single‑tool answer.

Before the interview

  • Reverse image search profile photos and candidate CV attachments.
  • Validate email domains and phone numbers, and flag disposable or forwarding providers (see the validation sketch after this list).
  • Run basic background and employment verification before extended technical assessments.
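
One way to automate the email and phone checks above is sketched below using two common third‑party libraries (dnspython and phonenumbers); the helper names are our own, and these are hygiene filters, not proof of identity:

```python
import dns.exception
import dns.resolver   # pip install dnspython (>= 2.0 for resolve())
import phonenumbers   # pip install phonenumbers

def email_domain_accepts_mail(email: str) -> bool:
    """A domain with no MX records cannot receive mail -- a strong red flag."""
    domain = email.rsplit("@", 1)[-1]
    try:
        return len(dns.resolver.resolve(domain, "MX")) > 0
    except dns.exception.DNSException:
        return False

def phone_region(number: str) -> str | None:
    """Return the region code (e.g. 'US') if the number is valid, else None."""
    try:
        parsed = phonenumbers.parse(number, None)  # expects E.164, e.g. +14155550123
    except phonenumbers.NumberParseException:
        return None
    if not phonenumbers.is_valid_number(parsed):
        return None
    return phonenumbers.region_code_for_number(parsed)

# Compare phone_region(...) against the candidate's claimed location, and
# escalate when email_domain_accepts_mail(...) returns False.
```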

During the interview

  • Require a supervised live video interview (not a pre‑recorded clip) and ask for specific, verifiable artifacts in real time (e.g., holding a government ID next to the face, performing a short, unpredictable gesture; a prompt‑generator sketch follows this list).
  • Use interview platforms that bind sessions to IP addresses or that prevent replay and enforce webcam presence for remote technical tasks.
  • Sample technical tasks performed live (pair programming, whiteboard problems) with screensharing — require live input that’s difficult for bots to fake.
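
A trivial way to make the "unpredictable gesture" request genuinely unpredictable is to generate the prompt at interview time rather than from a fixed script. A minimal sketch, with illustrative gesture and phrase lists:

```python
import secrets
import time

GESTURES = ["touch your left ear", "hold up three fingers", "turn your head slowly to the right"]
PHRASES = ["purple elevator", "quiet harbor nine", "random maple sketch"]

def make_challenge(valid_seconds: int = 30) -> dict:
    """Pick a gesture+phrase pair at random and stamp an expiry.

    A pre-recorded or replayed video cannot respond to a prompt chosen
    seconds earlier, which is the point of the exercise.
    """
    return {
        "gesture": secrets.choice(GESTURES),
        "phrase": secrets.choice(PHRASES),
        "expires_at": time.time() + valid_seconds,
    }

challenge = make_challenge()
print(f"Ask the candidate to {challenge['gesture']} while saying '{challenge['phrase']}'.")
```

Rotate and expand the lists regularly; a short, static set of prompts is itself predictable.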

After the interview — forensic checks

  • Inspect video for telltale artifacts: inconsistent lighting, mismatched lip sync, unnatural blinking, or micro‑expression anomalies (advanced fakes may escape casual detection; a coarse lighting‑consistency sketch follows this list).
  • Request raw files or session logs from your interview vendor; use forensic or vendor detection reports when available.
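
For a first automated pass over retained recordings, a coarse check like the one below can surface abrupt lighting jumps between frames, one of the artifacts cheap face‑swaps can introduce. It assumes OpenCV (opencv-python) is available; the threshold is an arbitrary starting value, and this is a triage aid only, since polished fakes will pass it:

```python
import cv2  # pip install opencv-python

def lighting_jumps(video_path: str, threshold: float = 25.0) -> list[int]:
    """Return frame indices where mean gray-level brightness jumps sharply."""
    cap = cv2.VideoCapture(video_path)
    suspicious, prev_mean, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mean = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
        if prev_mean is not None and abs(mean - prev_mean) > threshold:
            suspicious.append(idx)
        prev_mean, idx = mean, idx + 1
    cap.release()
    return suspicious

# Frames flagged here deserve a human look; real fakes may show none at all.
```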

On the technology front, detection systems and benchmarks continue to evolve. National labs and research groups are producing standardized evaluations and detection datasets that improve model robustness, but no automatic detector is perfect — combine tool outputs with human review.

Policies, escalation and reporting: an operational playbook

Turn detection into policy with clear playbooks and escalation paths. Below is a compact operational playbook HR and security teams can adopt immediately; each entry pairs a trigger with the required action and its owner (a data‑encoded version follows the list).

  • When a resume or profile fails reverse image or domain checks: pause processing, request additional ID and two professional references, and verify references independently. Owner: recruiter + background‑check vendor.
  • When a candidate refuses live verification or provides only a pre‑recorded video: require a supervised live interview or an in‑person meeting, and escalate if the refusal persists. Owner: hiring manager.
  • When an interview shows deepfake artifacts or the vendor flags manipulation: retain session logs, preserve evidence, run a forensic review, and block the candidate account. Owner: security/legal.
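
Encoding those rows as data keeps triage tooling and humans on the same page. A minimal sketch, with illustrative trigger keys and owner names:

```python
# Illustrative trigger keys; align them with your ticketing system's labels.
ESCALATIONS = {
    "image_or_domain_check_failed": {
        "action": "Pause processing; request additional ID and two references; verify independently",
        "owner": "Recruiter + background-check vendor",
    },
    "refused_live_verification": {
        "action": "Require supervised live interview or in-person meeting; escalate if refusal persists",
        "owner": "Hiring manager",
    },
    "deepfake_artifacts_flagged": {
        "action": "Retain session logs, preserve evidence, run forensic review, block account",
        "owner": "Security/Legal",
    },
}

def route(trigger: str) -> tuple[str, str]:
    """Look up the required action and owning team for a detection trigger."""
    entry = ESCALATIONS[trigger]
    return entry["action"], entry["owner"]
```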

Reporting and legal steps

  • Report fraud and impersonation incidents to the FTC and, when relevant, to law enforcement or the FBI's IC3. Maintain an internal incident log for HR and security teams (a minimal log‑entry sketch follows this list).
  • Follow existing identity‑theft prevention guidance (e.g., the Red Flags Rule) when handling applicant PII and create a written program for suspected identity theft scenarios.
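
For the internal incident log, an append‑only JSON Lines file is often enough to start. A minimal sketch; the field names are illustrative and should match your own incident‑response template, and the record should carry an internal candidate reference rather than raw PII:

```python
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class FraudIncident:
    candidate_ref: str            # internal ID, never raw PII
    trigger: str                  # e.g. "deepfake_artifacts_flagged"
    evidence: list[str] = field(default_factory=list)   # paths to preserved logs/recordings
    reported_to: list[str] = field(default_factory=list)  # e.g. ["FTC", "IC3"]
    timestamp: float = field(default_factory=time.time)

def log_incident(incident: FraudIncident, path: str = "incidents.jsonl") -> None:
    """Append one incident per line so HR and security share a single record."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(incident)) + "\n")

log_incident(FraudIncident("cand-0042", "refused_live_verification",
                           evidence=["s3://evidence/cand-0042/session.log"]))
```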

Conclusion — Practical next steps

  • Update job‑posting workflows to limit unsolicited hires and require verified employer emails for offers.
  • Implement a short candidate‑authenticity checklist for every hire (reverse image search, live supervised interview, vendor flag cleared).
  • Train recruiters to spot AI‑generated language (overly generic accomplishments, improbable achievements) and to escalate suspicious cases early.

Deepfakes and synthetic identities are an evolving threat: pair technical detection with clear HR policies, and test your hiring controls regularly. For more resources on best practices and reporting, see the FTC and industry HR guidance.