AI by 2025:
The urgent need for Training and Skilling
Contributors:
Allyn Bailey
Senior Director, SmartRecruiters
Julian Thomas
Managing Director of Digital Product, AMS
Rebecca Wettemann
CEO & Principal Analyst, Valoir
Generative AI (Gen AI) is an important part of the future of work, but what does that mean for talent leaders?
Our experts agree that a new approach to skilling current and future employees in the ethical use of artificial intelligence is a must.
When ChatGPT made its debut in late 2022, the initial reaction was varied, to say the least. Some technology experts predicted a bold new future of automation and innovation; business leaders predicted massive savings from the elimination of huge swaths of jobs; and doomsayers predicted the end of the world once AI becomes self-aware. Thankfully, the more dire predictions haven’t materialized, as organizations have proceeded with caution when it comes to handing the keys over to a tool that some critics have dubbed “spicy autocomplete” in online memes.
Talent acquisition (TA) leaders and their human resource counterparts have taken a keen interest in using Generative AI in their operations, with plans to expand its use by 2025. And they are taking steps right now to make the promise of Gen AI in the organization a productive and profitable reality. To achieve these goals, TA leaders will need to focus on AI training so that current and future employees have the right skills to use AI as effectively and as ethically as possible.
Unfortunately, most modern talent acquisition teams have not adequately addressed skilling for using AI in the coming year, according to “Is HR Ready for AI,” a new report from market research firm Valoir.
In a study of 150 global HR leaders, Valoir found that nearly one quarter of businesses have adopted some form of Gen AI for talent acquisition, making it the leading area for Gen AI adoption to date. Meanwhile, 30 percent plan to adopt it in the next 24 months. In terms of training employees to use AI, however, only 14 percent have an established training policy, and a shockingly low eight percent have a skills development program in place for workers whose positions could be made obsolete by machine learning.
According to Rebecca Wettemann, CEO and principal analyst for the Arlington, Virginia-based Valoir, TA leaders and their HR counterparts need to establish policies that provide guardrails for the safe and effective use of AI for all employees including their recruitment teams.
“Given how rapidly the field is evolving, TA leaders are going to have to look beyond their traditional training strategies to deliver effective AI training that is tailored to individual roles and existing skills levels,” says Wettemann.
And today’s Gen AI tools can help create the training and skilling programs for new and current employees. “Talent leaders need to move to a skills-based approach — if they haven't already — to identify opportunities for retraining and upskilling as well as skills gaps,” adds Wettemann. “The good news (and irony) is that AI is well suited to helping them build dynamic skills taxonomies and map skills to learning and development opportunities.”
Gen AI Training Needs for Today and Tomorrow
The use of machine learning by TA leaders and their recruiting teams is fairly old news. TA solutions have used AI to scour resumes, cover letters and job applications for years. But with the release of ChatGPT and similar tools built on large language models (LLMs), some recruiters are now relying on Gen AI to craft interview questions and candidate emails, power chatbots that interact with candidates and guide them through the application and interview process, and verify that the hiring process complies with DEI mandates and current employment laws.
Along with this innovation, greater use of AI is coming to the hiring process, and the training required may never stop, says Allyn Bailey, senior director of customer marketing for TA solution provider SmartRecruiters and a former global recruiter for Intel Corp.
“This technology isn't just tinkering around the edges; it's poised to fundamentally overhaul key aspects of the recruitment process,” says Bailey. “From recruitment marketing to candidate communication, assessment, and even the aggregation of hiring manager feedback, the potential for change is enormous.”
That said, Bailey says that machine learning’s most significant impact will come from businesses that don't just stop at implementing Gen AI solutions. “The real game-changers will be those that strategically integrate these AI capabilities with a wide range of automation solutions, creating a seamless, efficient, and highly effective recruitment ecosystem,” she says.
“This holistic approach is where the future of talent acquisition lies, and we're on the cusp of that exciting frontier,” adds Bailey.
Julian Thomas, Managing Director of Digital Product for AMS, paraphrases the advice that Uncle Ben gave Peter Parker in the origin story of Spider-Man: With great power comes great responsibility. And this means training employees with a new set of AI skills for the future.
“The training must focus on judgment and knowing the limitations of the technology. The people that are going to win are those who understand the limitations of this new technology as well as the opportunities,” he says. “The market is moving very fast and the technology's moving even faster, so we need good training.”
One key area for training is data. SmartRecruiters’ Bailey predicts that TA professionals who use Gen AI will have to become data experts, or at least become more comfortable with gathering and analyzing reams of information.
“In the Gen AI-driven hiring landscape, recruiters need to become maestros of tech, data, ethics, and experience design,” she says. “It's about getting smart with automation tools to streamline TA operations, diving deep into data analytics for sharper insights, and navigating the ethical maze of AI with a clear compass.”
This new data generated by AI can be analyzed for greater insights into the organization and the people who use these tools. “While we have reports on time to hire and things like that, AI is applying a large language model to the structured data we generate. This will make a much more natural way of reading insight from our process and the data we generate,” says Thomas.
One of those insights is the candidate experience in the recruitment process, or what Bailey calls the “secret sauce” in creating “journeys” through the hiring process that resonate with candidates and TA teams alike to make every touchpoint meaningful with AI.
“Savvy recruiters will weave these elements together, creating a recruitment symphony that's both efficient and human-centric,” she says. “As companies catch the beat, those leading the charge in blending these skills will not just fill positions but will shape the future of work itself. It's time to remix recruitment with a blend of tech, heart, and art.”
Avoiding AI Hallucinations and Other Traps
But not everything is wine and roses with ChatGPT and similar tools. Ask anyone who has used ChatGPT to write their own obituary; the results are often ludicrous. The current batch of machine-learning tools has a propensity to generate responses that are flat-out false, whether because of low-quality data, improper or misleading prompts, or the unique interpretations of different AI systems. These fabrications, in which a Gen AI tool invents answers from whole cloth, are called “AI hallucinations.”
And there are real-world repercussions when AI hallucinations occur. ChatGPT, for example, inferred that an Australian politician was guilty in a bribery scandal when he was in fact the whistleblower. And in the summer of 2023, a U.S. judge fined two New York lawyers $5,000 for submitting a legal brief that included six fabricated case citations generated by the same Gen AI platform.
“TA leaders need to be thinking about training beyond the technical use of AI in areas like critical thinking, so employees will be prepared to evaluate and interpret recommendations that AI delivers,” advises Wettemann.
Thomas agrees. “TA teams also need measurement of the effectiveness and outcomes of their work. We need to establish controls to understand the efficacy and ensure everyone is using it in the same way,” he says. “The organizations that do that will be successful.”
Right now, challenges have emerged in adopting policies and practices for AI training. Valoir found that the biggest hurdle inside organizations is a lack of AI skills and expertise (26 percent), followed by the fear of risk or compliance issues (23 percent), and lack of resources or budget (22 percent).
Training for Using AI Ethically
Employees using AI must also be trained in its ethical use, especially in terms of achieving DEI mandates and complying with government regulations. While the current iteration of Gen AI may be a modern, cutting-edge tool that is less than two years old, employers cannot ignore anti-discrimination employment laws that have been on the books for decades. And the potential for harm is there, warns AMS’ Thomas. LLMs trained on public Internet content and on data gathered inside an organization have the potential to, in his words, “institutionalize corporate biases.” In short, recruiters using AI may inadvertently end up interviewing and recruiting people who look like them and share their backgrounds.
At the same time, recruiters must train for the day when they will allow Gen AI to make hiring decisions, even if this reality could be years in the future.
AI will indeed take the reins in hiring decisions, predicts Bailey. “While we're often hesitant to sideline human judgment, the reality is that humans bring biases, delays, and inconsistencies to the table,” she says. “AI, on the other hand, offers a path to more objective, efficient, and streamlined recruitment processes.”
Despite what Bailey calls “popular resistance” to the idea that a machine will eventually hire human beings, she believes that the future of hiring leans heavily towards AI-driven decisions. “The question isn't if AI will lead in hiring, but when and how we'll adapt to this shift,” she says.
Thomas agrees that it would be naive for TA leaders to assume that Gen AI won't be hiring new workers in the near future, just as it would be naive to assume that medical AI tools will never diagnose illnesses and prescribe medications. Not only will AI hire new workers, Thomas believes it will decide which AI technology to deploy inside an enterprise.
“Will AI recruit people? If it does, it'll be because it's trusted, understood and measured to be more effective than a human recruiter. But as long as it isn't measured and known to be more effective, it shouldn’t do so,” he says.
“This is one reason that recruiters should be trained and not currently hand over the hiring of a candidate to Generative AI, no matter how smart it appears to be,” advises Thomas.
Ultimately, employees and especially TA teams must be trained in the potential power of Gen AI as a productive and potentially disruptive form of workplace technology. At the same time, everyone who trains employees in using this new gear must not lose sight of the people it is designed to serve.
“This is really about putting human factors first and understanding that while there's plenty of potential benefit from AI, there's also a lot of fear and it's not all unfounded,” warns Wettemann. “Rather than taking a technology-first generative AI strategy, leaders will need to take a human-first approach, giving employees both the guardrails and incentives they need to maximize value and minimize risk from AI.”
written by Phil Albinus and reviewed by Catalyst Editorial Board
with contribution from:
Allyn Bailey
Senior Director, SmartRecruiters
Julian Thomas
Managing Director of Digital Product, AMS
Rebecca Wettemann
CEO & Principal Analyst, Valoir