• ERE Weekly

The Arms Race in Candidate Fraud and Detection

AI fraud in hiring has moved from edge case to everyday risk.

Earlier this year, Gartner found that 6% of surveyed candidates admitted to participating in interview fraud — either posing as someone else or having someone else pose as them in an interview. Gartner also predicts that by 2028, a quarter of all candidate profiles worldwide will be fake.

The problem may be growing even faster than that. According to Greenhouse’s AI in Hiring Report, 91% of US hiring managers and 89% in Europe report that they have either suspected or caught AI-driven candidate misrepresentation.

In the US, Greenhouse found that the most common forms of AI-enabled fraud are:

  • Fake voices or backgrounds: 32%

  • AI scripts during interviews: 32%

  • Deepfakes: 18%

Recruiting technology providers are responding by adding fraud detection as a core part of their products. In most cases, AI is both the problem and the solution.

LinkedIn’s recent Hiring Assistant rollout emphasized its 90 million verified profiles. Greenhouse announced Greenhouse Real Talent earlier this year, in partnership with CLEAR, which similarly verifies candidate identity. Covey scores inbound candidates and flags high-risk applications.

Ezra, an eight-month-old startup, adds a screening layer at the top of the funnel via its AI interviewer product. Candidates are invited into a short voice interview with an AI agent that asks structured questions and scores the responses.

Cheat detection is a core part of the product. Ezra analyzes delivery patterns to flag whether a candidate is likely reading from a script. According to CEO Ophir Samson, roughly half of candidates in some use cases are now reading out ChatGPT-generated answers during interviews.

Samson shared a video from a “candidate” who was caught in the wild using an AI avatar to attend an interview, and highlighted four specific tells that, taken together, gave the fake away: 

  • Unnatural hand movements. The avatar used its hands in ways that felt slightly exaggerated, just a bit off from normal human gesturing.

  • Clothing that did not behave correctly. As the figure moved, the clothes did not crease or fold the way real fabric would. They stayed unnaturally smooth and static.

  • AI-style fingers. The fingers, especially the ring fingers, were subtly too long compared to normal human proportions, a familiar tell in AI-generated imagery.

  • Lip-sync drift over time. As the conversation went on, the mouth movements fell gradually out of sync with the audio.

None of those signals alone would necessarily trigger alarm on a busy day of interviewing, especially since the avatar had a thick accent, making it tough to zero in on verbal cues. Taken together, they form a recognizable pattern, but as AI gets more sophisticated it is only going to get harder to spot. 

According to the Greenhouse report, 87% of US recruiters say they have tightened their screening process in some way. At the ERE Recruiting Innovation Summit, Anita Chandrasekhar, Global Head of Talent Strategy and Operations at Zapier, shared the multi-pronged approach that the company has taken, with safeguards at every stage in the funnel:

  • At application. Sanity checks on work history, IP and location comparisons, the age and consistency of social profiles, and patterns like candidates claiming to have worked at a company longer than the company has existed.

  • At interview. Camera-on expectations, recording or AI note-taking tools, and explicit training for recruiters and hiring managers on red flags such as typing sounds before answers, long pauses that match “listening to a script” behavior, or inconsistencies in accent or appearance between stages.

  • At offer and onboarding. Identity verification that goes beyond a selfie, tighter control of where equipment is shipped, and background checks that look for identity reuse or other anomalies.
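The application-stage sanity checks above can be sketched as a simple screening rule. This is a minimal illustration, not Zapier's actual implementation: the function name, record fields, and founding-date lookup are all hypothetical.

```python
from datetime import date

# Hypothetical founding dates; a real system would pull these
# from a company-data provider rather than a hardcoded table.
COMPANY_FOUNDED = {
    "ExampleCo": date(2011, 10, 1),
    "NewStartup": date(2025, 3, 1),
}

def flag_impossible_tenure(claims, today=date(2025, 11, 1)):
    """Flag work-history claims where the stated start date predates the
    company's founding, or the claimed tenure exceeds the company's age."""
    flags = []
    for claim in claims:
        founded = COMPANY_FOUNDED.get(claim["company"])
        if founded is None:
            continue  # unknown company: leave for human review
        if claim["start"] < founded:
            flags.append((claim["company"], "start date predates founding"))
        elif (claim["end"] - claim["start"]) > (today - founded):
            flags.append((claim["company"], "tenure longer than company age"))
    return flags

claims = [
    {"company": "ExampleCo", "start": date(2009, 1, 1), "end": date(2014, 1, 1)},
    {"company": "NewStartup", "start": date(2025, 4, 1), "end": date(2025, 11, 1)},
]
print(flag_impossible_tenure(claims))
# → [('ExampleCo', 'start date predates founding')]
```

A check like this is cheap to run on every application and, as Chandrasekhar's framing suggests, is only one layer: it catches the impossible claims automatically while leaving ambiguous cases for a recruiter to review.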

The goal is to make it increasingly difficult to get through multiple layers undetected.

Chandrasekhar also emphasized the importance of explaining the “why” behind new controls. Candidates are more willing to accept intrusive camera requirements, recordings, and identity checks when they understand that these policies are there to protect both sides from fraud.

AI has made it easier than ever to fake a resume, a voice, or even a face on a screen. It’s also part of the solution to spotting that fakery at scale.

David

P.S. If you found this newsletter valuable, chances are your colleagues will too. Feel free to forward it along—and if it landed in your inbox by way of a friend, you can subscribe here to get the next one directly.

Webinars

From Research to Results: Inside Recruiter Nation 2025

December 3, 2025 | 12:00 PM EST | 1 Hour

Join us as we unpack what more than 1,200 talent-acquisition professionals revealed about today’s recruiting realities — and how you can turn those insights into a winning strategy for 2026.

We’ll share the data behind today’s candidate-supply challenges, tech-stack shake-ups, the rise of skills-first hiring, AI-adoption hurdles, and more. Whether you’re a TA leader or practitioner, this session is your chance to move beyond the headlines and get actionable findings to sharpen your hiring game. (ERE)