Generative AI is shaking up white-collar work, and recruiting is already feeling the pain. What was once a field focused on optimizing efficiency and candidate fit has taken a sharp turn. The bigger worry now? AI-driven application fraud.
A recently released Software Finder survey[1] paints a stark picture: recruiters are being hit by a barrage of fabricated resumes, AI-generated portfolios, and deepfake interviews. As these fakes grow more realistic, the entire hiring process, built on honesty and verified identity, is under threat.
Recruiters Are Already Seeing Fakes
The survey gathered responses from 874 recruitment professionals, and what they had to say confirms what most suspected: AI-powered falsification is already widespread.
- 72% have received AI-generated resumes
- 51% have received fabricated work samples or portfolios
- 15% have encountered deepfake or face-swapped video interviews
- 17% have heard altered voices or audio filters
Even with those statistics, 75% of recruiters feel confident that they can identify AI-aided candidates on their own. That may be wishful thinking. Almost half already flag or reject candidates over suspected AI use, and 40% have rejected applicants over identity concerns.
Some applicants use AI just to tighten up spelling or formatting. Others are gaming the entire system: faking identities, synthesizing voices, or submitting portfolios they never created. The list of tactics is long and expanding fast.
Where It Hurts Most: Tech, Marketing, and Creative Jobs
Some industries are more at risk than others. According to the survey, recruiters in certain fields are seeing far more AI abuse:
- Tech: 65% say it’s the most targeted
- Marketing: 49% say it’s exposed
- Creative/design: 47% report frequent tampering
These roles rely on digital deliverables: portfolios, campaigns, sample code. All of it is easy to forge with AI. A designer can build an AI-made portfolio in minutes. A coder can submit code lifted from GitHub Copilot. Marketers can hand over AI-generated ad copy or brand decks.
And it’s not only the materials. The presentation looks professional too. Add remote interviews and aggressive hiring timelines, and it’s no wonder AI-assisted candidates are slipping through.
These technologies are no longer niche. Browser-based apps can mimic speech, imitate facial expressions, and create entire fake profiles. What once demanded advanced skill now takes little more than a decent Wi-Fi connection.
Detection Tools Are Still Playing Catch-Up
Even though the threat is growing, most companies are not equipped to spot it effectively. Here is how things stand today:
- Only 31% use software to detect AI or deepfake material
- 66% still rely on manual screening
- 53% use third-party background checks
- Only a third have applicant tracking systems (ATS) capable of spotting AI-based deception
And training? That’s thin on the ground. Close to half of HR professionals have received no training on how to spot AI fakery, and only 15% say their company plans to offer such training in the near term.
Things might improve, as 40% of companies say they intend to invest in detection software within the next year. But for the moment, the gap is clear: AI is evolving faster than the tools meant to stop it.
So why the lag? Budgets, uncertainty, and risk. HR leaders worry about false positives, or that tools won’t keep pace with AI’s evolution. Others aren’t even sure what counts as unethical AI use. Should a resume rewritten by ChatGPT be disqualified? Should candidates be required to disclose that? Most companies don’t have a policy, and that leaves too much open to interpretation.
Should Job Platforms and Lawmakers Step In?
Hiring managers can’t shoulder this responsibility alone. Most believe platforms and regulators must help tighten standards and verification. Here is where agreement is building:
- 65% would support mandating live-only interviews
- 54% would want stricter background checks
- 39% would support third-party video identity verification
- 37% would prefer biometric or facial verification as protection
And it’s not all on employers. A majority think platforms such as LinkedIn, Indeed, and others should be doing more:
- 65% say platforms should help identify AI-generated candidates
- 62% favor mandatory disclosure of AI use on applications
- 56% would pay extra for recruitment software with in-app fraud detection
The conversation is evolving. Recruiters no longer see this as just an HR issue. It’s becoming a systemic problem that platforms, vendors, and governments need to address together.
And the law may already be lagging. With AI tools becoming cheaper and more realistic, authenticating a candidate’s identity may eventually require legislation, since individual employers cannot keep up on their own.
Resume Faking Heads the List of Risks
Of all the forms of AI-facilitated dishonesty, resume forgery ranks as the greatest threat:
- 63% of recruiters identify AI-supercharged resumes as the biggest risk
- 37% consider deepfaked video interviews the bigger danger
That’s probably because recruitment still revolves around documents: resumes, cover letters, writing samples, all easily doctored or faked with AI.
But video manipulation is coming up fast. More and more companies are embracing remote interviews and asynchronous video platforms. As that trend continues, AI-enhanced voice and face manipulation will become more common, and harder to detect.
Even seasoned recruiters admit it’s becoming increasingly difficult to catch deepfakes. That raises the odds of bad hires, legal issues, and reputational damage. An imposter brought on board under false pretenses can drain time and resources and erode company culture.
And the barrier to entry keeps dropping. High-quality fakery no longer requires special software. Most of it runs in your browser, or through apps on your phone. What was once rare is now becoming the new norm.
Trust Is the Real Casualty
A whopping 88% of recruiters believe AI fraud will reshape hiring practices within five years. But let’s be honest: it’s already happening.
Recruiters claim to rely on intuition, but the truth is more ambiguous. Few have had any formal training. Tools are missing or insufficient. Internal procedures are hazy at best. And AI-generated content is becoming increasingly difficult to tell from the real thing.
As AI gets better at simulating people, deception gets easier. And that strikes at the core of hiring: trust.
Here’s what businesses can begin to do today:
- Deploy detection software that surfaces red flags early (a minimal sketch follows this list)
- Train hiring managers and recruiters to spot suspicious activity
- Develop internal policies on what types of AI applications are permissible
- Consult with platforms and attorneys to establish wiser policies
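To make the first item concrete, here is a minimal Python sketch of a heuristic pre-screen that routes suspicious resumes to a human reviewer. Everything in it, the boilerplate phrase list, the sentence-length check, and the thresholds, is an illustrative assumption rather than a vendor feature or a validated detection criterion; real AI-content detection is much harder and prone to false positives, which is why flagged resumes here go to a recruiter instead of being auto-rejected.

```python
import re
import statistics

# Hypothetical boilerplate phrases loosely associated with AI-generated
# resumes. This list and all thresholds below are illustrative assumptions,
# not validated detection criteria.
BOILERPLATE_PHRASES = [
    "results-driven professional",
    "proven track record of delivering",
    "leveraging cutting-edge",
    "in today's fast-paced environment",
]


def screen_resume(text: str) -> list[str]:
    """Return human-readable red flags for manual review.

    This is a pre-filter only: flagged resumes should be routed to a
    recruiter, never auto-rejected.
    """
    flags = []
    lowered = text.lower()

    # Flag 1: known boilerplate phrasing.
    hits = [p for p in BOILERPLATE_PHRASES if p in lowered]
    if hits:
        flags.append(f"boilerplate phrasing: {', '.join(hits)}")

    # Flag 2: unusually uniform sentence lengths. Human writing tends to
    # vary; near-constant lengths are a weak hint of machine generation.
    sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) >= 5 and statistics.pstdev(lengths) < 2.0:
        flags.append("unusually uniform sentence lengths")

    return flags


if __name__ == "__main__":
    sample = (
        "Results-driven professional with a proven track record of "
        "delivering value. Led teams across five projects. Shipped tools "
        "used by many. Built systems serving real users."
    )
    for flag in screen_resume(sample):
        print("review:", flag)
```

In practice, signals like these would feed a review queue inside an ATS rather than stand alone, and any real deployment would need its false-positive rate measured before the output influences hiring decisions.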
Not all AI usage is nefarious. Some applicants use it to correct grammar or rephrase a summary. Others, however, use it to fabricate entire careers. Clear policies help recruiters define the line and apply standards evenly across the board.
The more profound change? Redefining what “authentic” means to us.
Zoom interviews and hunches won’t do the job anymore. If AI can forge resumes, faces, voices, and even work histories, authentication needs to be part of the hiring process, not an afterthought.
The old way of hiring was supposed to be about finding the best candidate. Today, it’s also about ensuring the best candidate is actually real.
Read next:
• Tree Planting Overhyped: Study Warns Forests Cannot Replace Fossil Fuel Cuts[2]
• Google’s Danny Sullivan Reminds Site Owners That SEO Basics Still Count in the Age of AI Search[3]
References
- ^ Software Finder survey (softwarefinder.com)
- ^ Tree Planting Overhyped: Study Warns Forests Cannot Replace Fossil Fuel Cuts (www.digitalinformationworld.com)
- ^ Google’s Danny Sullivan Reminds Site Owners That SEO Basics Still Count in the Age of AI Search (www.digitalinformationworld.com)