Over the past two years, generative AI has transformed how I work, becoming an invaluable ally. Yet beneath my appreciation lies concern. As AI adoption accelerates, I’ve witnessed an increasingly cavalier approach to implementation, particularly in sensitive domains like human resources.
This disconnect prompted me to interview four industry leaders about AI, algorithms, and talent acquisition. Their perspectives reveal both tremendous potential and significant challenges.
The Gold Rush Mentality
“There’s a gold rush mentality,” explains Robert Newry, co-founder of psychometric assessment firm Arctic Shores. Companies are racing into implementation without considering the consequences.
“Some AI recruitment tools are available for as little as $6 per month,” Newry notes. “If you get a cheap tool, you’re not really going to get the best person for the job irrespective of their color, gender, background, or somebody who hasn’t been able to manipulate the program.”
Are we valuing efficiency over effectiveness and fairness?
Data Science vs. Social Science
“The data scientists don’t care about references or adverse impact,” Newry explains. “They’re just concerned with making accurate mathematical predictions. That’s fine if humans aren’t involved. It’s disastrous if humans are involved.”
Vinaj Raj, part of the founding team at AI chatbot company Chatsimple, describes AI through three elements: language (“a two-dimensional view”), science (adding “the three-dimensional factor of understanding”), and algorithms (the contextual layer that makes AI applicable).
Newry points to Amazon’s cautionary tale: the tech giant abandoned an AI recruitment project after discovering it predominantly favored white male candidates from specific computing programs.
Gaming the System
As AI systems become more prevalent in hiring, candidates are developing increasingly sophisticated methods to game them. Tools like LazyApply.com can blast out 5,000 automated job applications overnight. Alex Lee, founder of Kazka AI Consulting, describes how “people include information in their written CVs that is invisible to the human eye and can trigger the AI tool.”
Interestingly, Lee sees value in such ingenuity: “I would flag the prompt-injecting people and talk to them. You did something creative.”
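For readers curious how a screener might catch this trick, here is a minimal sketch (my illustration, not a tool any interviewee mentioned) that scans a résumé PDF for white or microscopic text of the kind Lee describes. It assumes the PyMuPDF library is available; the file name and size threshold are hypothetical.

```python
# Sketch: flag near-invisible text in a résumé PDF, the trick Lee describes
# (keywords or prompt injections hidden from human reviewers but read by AI).
# Assumes PyMuPDF ("pip install pymupdf"); the file path is hypothetical.
import fitz  # PyMuPDF

WHITE = 0xFFFFFF      # integer sRGB value PyMuPDF reports for pure white text
TINY_FONT_PT = 2.0    # text this small is effectively invisible on the page

def find_hidden_text(pdf_path: str) -> list[str]:
    """Return text spans rendered in white or at a microscopic font size."""
    suspicious = []
    with fitz.open(pdf_path) as doc:
        for page in doc:
            for block in page.get_text("dict")["blocks"]:
                for line in block.get("lines", []):  # image blocks have no lines
                    for span in line["spans"]:
                        if span["color"] == WHITE or span["size"] < TINY_FONT_PT:
                            suspicious.append(span["text"].strip())
    return [s for s in suspicious if s]

if __name__ == "__main__":
    hits = find_hidden_text("candidate_cv.pdf")  # hypothetical file name
    if hits:
        print("Possible hidden text:", hits)
```

A production screener would go further, for example comparing the text a parser extracts against what a rendered image of the page actually shows, but even a crude check like this catches the most common version of the trick.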
Where AI Delivers Value
Despite challenges, our experts identify areas where AI delivers clear value.
Automating repetitive tasks: “You can use AI to improve tasks that are repetitive and administrative but don’t require decision-making,” says Newry.
Enhancing productivity: Employees working with AI “get four hours a week back,” Lee explains, allowing salespeople to “find better leads and build better relationships.”
Improving customer interactions: Hao Sheng of Chatsimple explains how AI can address fundamental problems in customer service. He draws from eight years’ experience in conversational AI, noting that his work with contact centers revealed an extraordinarily high turnover rate: agents typically stay only 18 months. “This rapid turnover creates a perpetual cycle of hiring and training, resulting in operational costs that far exceed agent salaries,” he notes. AI can break this costly cycle by handling routine interactions while allowing human agents to focus on more complex, fulfilling work.
The 80/20 Rule in Recruitment
Raj proposes that AI can handle about 80% of initial screening but cannot replace human judgment for cultural fit. Newry agrees: “For high-stakes decisions, such as do you get a job, you can only use algorithms when you’ve thoroughly validated, tested, and monitored them.”
This suggests a future where AI assists rather than makes final decisions.
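To make that division of labor concrete, here is a hedged sketch of what “AI assists, humans decide” can look like in code. The scoring model, names, and thresholds are placeholders of my own, not anything our interviewees described; the point is simply that the algorithm reorders the queue while every accept-or-reject call stays with a person.

```python
# Sketch of the "AI assists, humans decide" split: an automated score routes
# candidates into review queues, but no one is rejected or hired by the model
# alone. The score source and the fast-track threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_score: float  # 0.0-1.0, produced by whatever screening model is in use

def triage(candidates: list[Candidate], fast_track: float = 0.8):
    """Split candidates into human review queues; nobody is auto-rejected."""
    priority_review, standard_review = [], []
    for c in candidates:
        # The model only changes the *order and priority* of human review.
        (priority_review if c.ai_score >= fast_track else standard_review).append(c)
    return priority_review, standard_review

priority, standard = triage([Candidate("A. Jones", 0.91), Candidate("B. Smith", 0.42)])
print(f"{len(priority)} fast-tracked for human review, {len(standard)} in the standard queue")
```

The design choice that matters is the absence of any reject branch in the code: validation, monitoring, and the final judgment Newry insists on remain human responsibilities.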
Navigating Forward
Our experts emphasize four elements needed for responsible development.
Thoughtful regulation: All four experts support regulation for AI, especially in high-stakes applications.
Containment mechanisms: Lee emphasizes the ability to control AI systems when problems arise.
AI implementation strategy: Sheng advocates a gradual approach to building trust with AI systems. “It’s like self-driving cars—people will not fully trust them until they see people behind the wheel. It’s a gradual process where the AI builds trust with humans.” He recommends starting simple: “Don’t set your expectations too high. Let AI be your co-pilot before becoming autopilot.” This measured approach allows organizations to see clear ROI while gradually expanding AI capabilities.
AI literacy: Raj advocates for AI education, while Lee identifies a common pitfall of “not providing training to employees. They ask AI one question, and it doesn’t give what they want, and they give up.”
Safety as Foundation
Sheng emphasizes that safety must be the cornerstone of AI development: “Safety is the most important thing we should have in place. Preventing AI from spreading false information or preventing AI from eventually outsmarting humans and getting out of control.” In other words, responsible AI isn’t just about efficiency; it’s about creating systems we can rely on and control.
Human Connection Remains Essential
Raj predicts that “In the next year, AI will pollute every aspect of our lives. In real life, human conversation will matter a lot.” This reinforces Lee’s observation that AI is “there to help drive decision-making, not to make decisions.”
6 Key Takeaways
- Balance efficiency with effectiveness: The cheapest AI solution rarely delivers the best outcomes, particularly in high-stakes contexts.
- Maintain human oversight: Human judgment remains essential for decisions affecting people’s lives.
- Invest in validation: Thorough testing and monitoring are non-negotiable for recruitment applications.
- Build trust gradually: As Sheng suggests, start with AI as a co-pilot before expecting autopilot capabilities.
- Prioritize safety and containment: Develop mechanisms to control AI systems as they become more autonomous.
- Foster AI literacy: Organizations that help employees collaborate with AI will see the most significant returns.
The future of AI isn’t about replacing humans but enhancing us—providing tools that free us from routine tasks while empowering better decisions. Success belongs to those who implement AI thoughtfully, with appropriate guardrails and a clear understanding of both capabilities and limitations.