A recent survey reveals significant distrust among job applicants regarding the use of artificial intelligence (AI) in the recruitment process. Only 26% of candidates believe that AI will evaluate their applications fairly, while over half (52%) acknowledge that AI is involved in screening their information. This growing scepticism reflects deeper concerns about bias and the legitimacy of job opportunities.
The survey, conducted in the first quarter of 2025 and involving 2,918 job candidates, indicates that 32% of respondents worry that AI could reject their applications. Additionally, 25% reported decreased trust in employers who utilise AI for evaluations. Alarmingly, only half of the candidates felt confident that the jobs they were applying for were legitimate.
Interestingly, while candidates are wary of AI's role in recruitment, many are using the technology to enhance their applications. According to a separate survey from the fourth quarter of 2024, 39% of candidates admitted to using AI tools during the application process. They primarily employed AI for generating résumé content (54%), cover letters (50%), and responses to assessment questions (29%).
The survey notes that the proliferation of AI complicates the evaluation of candidates' true abilities and identities. This concern is compounded by the issue of candidate fraud; a second-quarter 2025 survey found that 6% of respondents confessed to engaging in interview fraud, such as impersonating someone else. Gartner predicts that by 2028, a staggering one in four candidate profiles could be fake.
This environment of uncertainty may lead to candidates being more selective in their job applications. The latest data shows that only 51% of candidates accepted a job offer during their most recent application process, a sharp decline from 74% in 2023.
To combat candidate fraud while maintaining candidate trust, employers are encouraged to implement a multi-layered fraud mitigation strategy. Key steps include:
- Setting Clear Expectations: Employers should communicate their hiring standards and define acceptable AI usage while highlighting their fraud detection measures.
- Using Assessments Wisely: Recruiters must be trained to detect evasive behaviours, and assessments should incorporate safeguards against cheating, such as in-person interviews.
- Refining Evaluation Methods: Fraud prevention should extend beyond the initial hiring phase, employing risk-based data monitoring and identity verification tools throughout the recruitment process (a minimal example is sketched below).
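To make the idea of risk-based data monitoring more concrete, here is a minimal sketch of how an applicant-tracking workflow might score fraud signals on candidate profiles and route high-risk ones to manual review. Everything in it is assumed for illustration only: the CandidateProfile fields, the signal weights, the disposable e-mail domains, and the triage threshold are hypothetical and are not drawn from the surveys cited above.

```python
from dataclasses import dataclass

# Hypothetical fields an applicant-tracking system might already hold for each application.
@dataclass
class CandidateProfile:
    name: str
    email_domain: str
    id_verified: bool             # outcome of an external identity-verification check
    duplicate_contact_count: int  # other applications sharing the same phone number or e-mail

# Disposable e-mail providers, used here purely as an illustrative fraud signal.
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.com"}

def risk_score(profile: CandidateProfile) -> int:
    """Additive risk score for one profile; higher means more scrutiny is warranted."""
    score = 0
    if not profile.id_verified:
        score += 3                                # identity could not be confirmed
    score += 2 * profile.duplicate_contact_count  # contact details reused across applications
    if profile.email_domain in DISPOSABLE_DOMAINS:
        score += 2                                # disposable e-mail address
    return score

def triage(profiles: list[CandidateProfile], threshold: int = 3) -> list[CandidateProfile]:
    """Route high-risk profiles to manual review instead of rejecting them automatically."""
    return [p for p in profiles if risk_score(p) >= threshold]

if __name__ == "__main__":
    applicants = [
        CandidateProfile("A. Example", "gmail.com", id_verified=True, duplicate_contact_count=0),
        CandidateProfile("B. Example", "tempmail.com", id_verified=False, duplicate_contact_count=2),
    ]
    for p in triage(applicants):
        print(f"Flag for manual review: {p.name} (risk score {risk_score(p)})")
```

The design point, consistent with the guidance above, is that risk signals trigger additional human checks rather than automatic rejection, which helps preserve candidate trust while still surfacing suspicious profiles.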
As organisations navigate the complexities of AI in hiring, addressing these concerns will be crucial for fostering a trustworthy and effective recruitment environment.