Hiring has always had its blind spots, but the age of AI has opened up brand-new challenges. Candidates are no longer walking into interviews alone: many are bringing ChatGPT, coding copilots, or even deepfake technology with them. They might breeze through a technical test or deliver flawless interview answers, but once hired, the truth often comes out: the skills don't match the résumé.
It’s not just about productivity losses. Companies are also facing security risks when technical interviews themselves become attack vectors. Hackers have disguised malicious code inside DevOps challenges, using the hiring process as a way to get behind a company’s firewall.
These new realities are forcing companies to ask tough questions: How can we vet candidates in the age of AI? Do we return to in-person interviews? Do we build smarter, layered approaches that keep pace with both AI innovation and AI deception?
In this post, we’ll explore how AI is reshaping candidate vetting, the risks you can’t ignore, and the strategies leading companies are using to keep hiring both secure and effective.
The rise of AI-assisted cheating in job interviews
AI has changed the way candidates prepare for jobs and, in some cases, how they perform during interviews. From coding tests to behavioral questions, tools like ChatGPT, GitHub Copilot, and even AI-driven voice or video manipulation are letting applicants present a misleading picture of themselves and their skills.
Some of the most common tactics include:
- AI-powered coding assistants that generate real-time solutions during technical interviews.
- Voice clones and deepfake avatars that can manipulate responses or even impersonate candidates in virtual interviews.
- Proxy interviewers — sometimes aided by AI — answering questions on behalf of the applicant.
- Browser extensions and screen overlays that keep AI tools invisible during screen sharing, masking their use from interviewers.
The scale of the problem is striking:
- 81% of FAANG interviewers now suspect AI cheating during interviews, and a third have caught it firsthand.
- Some candidates can ace technical tests with AI help, but once hired, they fail to perform even basic tasks.
- Entire tools and platforms — like Interview Coder or Cluely — now openly market themselves as ways to cheat interviews.
The outcome is more than just wasted time. Companies risk onboarding employees who lack the core skills for the role, which leads to turnover, productivity losses, and serious cultural damage.
Security risks hidden in technical assessments
AI-assisted cheating isn’t the only threat in today’s hiring landscape. Increasingly, the technical interview itself can become a security risk. Hackers have learned to exploit coding challenges and DevOps assessments as handy backdoors into company systems.
Real-world incidents include:
- Malicious code disguised as interview tasks: Attackers embed obfuscated scripts into repositories or test files. When unsuspecting candidates run the code, it collects sensitive information or installs malware.
- DevOps exploit chains: Fake projects contain malware hidden in NPM or Python packages, enabling attackers to steal credentials, harvest files, or gain persistent access (a defensive sketch follows this list).
- Platform vulnerabilities: Flaws like insecure deserialization, unsafe file uploads, or SQL injection have been found in interview environments, creating opportunities for unauthorized access.
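A practical first line of defense against the package-based attacks above is to inspect a take-home repo for install-time hooks before running anything, since NPM lifecycle scripts execute automatically on `npm install`. Here's a minimal sketch in Python; the repo path and the list of risky hooks are illustrative assumptions, not a complete audit:

```python
import json
from pathlib import Path

# npm lifecycle hooks that run automatically on `npm install`;
# malware in fake interview projects often hides here.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def find_install_hooks(repo_root: str) -> list[tuple[str, str, str]]:
    """Return (manifest, hook, command) for every npm lifecycle script found."""
    findings = []
    for manifest in Path(repo_root).rglob("package.json"):
        try:
            data = json.loads(manifest.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # an unreadable manifest is itself worth a manual look
        scripts = data.get("scripts", {}) if isinstance(data, dict) else {}
        for hook, command in scripts.items():
            if hook in RISKY_HOOKS:
                findings.append((str(manifest), hook, command))
    return findings

if __name__ == "__main__":
    # "./take-home-task" is a hypothetical path to the repo you were sent
    for manifest, hook, command in find_install_hooks("./take-home-task"):
        print(f"{manifest}: `{hook}` runs: {command}")
```

Installing with `npm install --ignore-scripts` disables these hooks outright, and Python projects deserve the same scrutiny for code that executes at install time.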
Cybersecurity experts warn that even well-meaning companies may inadvertently expose themselves by using unvetted or outdated interview platforms. Best practices include sandboxing candidate code, monitoring abnormal activity, and never requesting credentials or sensitive company data during assessments.
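To make the sandboxing point concrete, here is one way a hiring team might execute a candidate's submission: inside a throwaway container with no network access and hard resource limits. This is a minimal sketch, not a hardened production setup; the base image, limits, timeout, and file names are assumptions to adapt:

```python
import subprocess

def run_submission_sandboxed(code_dir: str, timeout_s: int = 30) -> str:
    """Run candidate code in a disposable, network-isolated Docker container."""
    cmd = [
        "docker", "run",
        "--rm",                        # delete the container afterwards
        "--network", "none",           # no outbound calls home
        "--memory", "256m",            # cap memory
        "--cpus", "1",                 # cap CPU
        "--read-only",                 # immutable container filesystem
        "--cap-drop", "ALL",           # drop all Linux capabilities
        "-v", f"{code_dir}:/task:ro",  # mount the submission read-only
        "python:3.12-slim",            # throwaway base image (assumption)
        "python", "/task/solution.py", # hypothetical entry point
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
    return result.stdout

# Usage: output = run_submission_sandboxed("/tmp/candidate-42")
```

The specific tooling matters less than the properties it enforces: no network, no persistence, and strict limits on what untrusted code can touch.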
The takeaway is clear: vetting candidates doesn’t just mean evaluating skills — it means securing the vetting process itself.
Evolving vetting methods beyond the interview
Traditional interviews may no longer be enough to ensure candidates have the skills and integrity to succeed. To combat AI-assisted cheating and strengthen decision-making, companies are moving toward multi-layered, skills-based vetting methods.
Some of the most effective approaches include:
- Skills-based assessments: Standardized coding challenges, simulations, or case studies that measure job-specific competencies. Many platforms now randomize or customize questions to make cheating harder (a simple sketch follows this list).
- Supervised live coding: Pair programming or real-time problem-solving sessions with an interviewer, which reveal both technical reasoning and communication skills.
- Probationary projects: Short-term, paid trial assignments that simulate real job conditions, providing a clear view of both technical output and teamwork.
- Group exercises: Collaborative coding or problem-solving tasks that surface how candidates work with others under pressure.
- Soft skills assessments: Structured behavioral interviews or AI-assisted evaluations that measure communication, adaptability, and cultural fit.
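To illustrate the randomization idea from the first bullet, here is a minimal sketch of per-candidate question selection. Seeding the draw on the candidate ID keeps it reproducible for audits; the question bank and identifiers are invented for illustration:

```python
import hashlib
import random

# Hypothetical question bank, keyed by topic
QUESTION_BANK = {
    "algorithms":    ["q-algo-01", "q-algo-02", "q-algo-03", "q-algo-04"],
    "debugging":     ["q-bug-01", "q-bug-02", "q-bug-03"],
    "system-design": ["q-sys-01", "q-sys-02", "q-sys-03"],
}

def assessment_for(candidate_id: str, per_topic: int = 1) -> list[str]:
    """Draw a distinct question set per candidate, reproducibly."""
    # Seed from the candidate ID so a rerun (e.g., for an audit) matches.
    seed = int(hashlib.sha256(candidate_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return [q for topic in QUESTION_BANK
              for q in rng.sample(QUESTION_BANK[topic], per_topic)]

print(assessment_for("candidate-42"))  # same ID -> same questions every time
```

Randomizing the draw only helps if the bank itself stays fresh; shared questions leak over time, so rotating or parametrizing them matters just as much.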
Many employers are adopting hybrid vetting models — combining virtual efficiency early on with in-person or highly supervised tasks in later stages. This layered approach reduces reliance on résumés or “gut feel” and makes it harder for AI or proxy interviewers to slip through undetected.
The return of in-person (and hybrid) interviews
With AI tools making virtual interviews easier to game, many companies are rethinking how much trust they can place in remote-only hiring. Industry leaders like Google, Cisco, and McKinsey have already reinstated at least one in-person interview round for technical and consulting roles.
The shift is backed by clear trends:
- Recruiters report a 5–6x increase in demand for face-to-face meetings compared to previous years.
- Gartner predicts that by 2028, up to 25% of candidate profiles could be fake, prompting widespread adoption of in-person or heavily monitored hybrid interviews.
- Surveys show a 20–35% rise in in-person interview components across technical and executive roles in just the past year.
Still, cost, logistics, and candidate preferences make a full return to in-person hiring unlikely. Instead, hybrid models are emerging as the norm:
- Early stages leverage virtual interviews for reach and efficiency.
- Final rounds include in-person or closely monitored assessments to confirm identity, verify skills, and reduce AI manipulation risks.
The future of candidate vetting will likely be layered, balancing convenience with credibility, speed with security.
Building a smarter vetting strategy
The age of AI has made one thing clear: no single interview format is enough. Between cheating tools, security risks, and the limits of traditional methods, companies need a more resilient approach to vetting.
A smarter strategy means layering different checks at different stages:
- Use skills-based assessments to test real-world ability.
- Add supervised or live sessions to confirm independent problem-solving.
- Incorporate probationary projects or trial tasks to validate skills in practice.
- Weave in soft skills evaluations to ensure candidates can collaborate, communicate, and adapt under pressure.
The future of hiring lies in layered, secure, and adaptive vetting strategies — ones that combine technical, behavioral, and security-focused checks. But no company should face this challenge alone. The hiring landscape is too complex, and the risks are too high.
Hiring teams are already stretched thin, and keeping pace with both AI innovation and AI deception requires constant adaptation. That’s why many companies are starting to lean on a community of partners — experts who can help navigate this complex landscape, from securing technical assessments to ensuring talent quality.
This is where PowerToFly comes in. With our Staff Augmentation services, you can strengthen your hiring process, access top-tier global talent, and ensure every candidate is vetted with both precision and protection in mind.
👉 Build your tech teams with global talent, local impact, and zero hiring headaches