The verdict is in: AI is poised to become a key driver of success in HR within the next five years. In fact, 70%* of business leaders think it will be transformative, and with good reason. From giving colleagues unprecedented opportunities for career advancement to offering more targeted upskilling and enabling more rewarding work, HR teams around the world are about to experience a significant boost.
This new realm of supercharged work isn’t risk-free. Nothing is. Trust remains a critical challenge to overcome, which means education and training are essential.
And it’s crucial we get this right. AI promises tangible upticks in productivity across the board, which could help the UK shift up a gear. This boost is long overdue: since 2008, the UK has struggled to achieve notable productivity gains. According to POID’s report, current levels are 24% lower than they would have been had pre-2008 trends continued.
Let’s take a closer look at exactly how AI can help change the dynamic.
The AI imperative
Organisations clearly need to tap into the opportunity AI offers. HR leaders will be essential in making that happen, as it’s up to them to cultivate a culture of responsibility and understanding that enables AI adoption.
Such a culture will require internal initiatives, like creating AI guidelines and upskilling opportunities so more people can use the technology, as well as transparency from third parties. In other words, AI approaches need to be open books; it’s a policy we firmly believe in, and our own approach is publicly available.
But this is just the tip of the iceberg – how can HR leaders truly build AI trust in their companies so everyone can enjoy the many benefits it can offer?
How to build a culture of AI trust
Our global study, Closing the AI Trust Gap, found that both leaders and employees believe in, and hope for, a transformation scenario with AI. However, doubts are particularly evident among employees. This can change if employees are introduced to AI gradually, at a pace that gives them ample time to familiarise themselves with the technology.
If the goal is to radically accelerate AI adoption within an organisation, leaders need to be clear about the tasks it’s being deployed to complete. HR is important in delivering this message, as well as the message that there’s a stark difference between consumer generative AI tools, which can produce hallucinations, and enterprise-grade workplace AI. It’s no wonder that some colleagues would be put off using AI, or even afraid of it, if all they hear are stories of it going awry. They must be empowered and trained to use the enterprise-ready, trustworthy tools at their disposal so they can see the difference.
Consumer generative AI tools are trained on publicly available data and have different security measures. They are, of course, designed to work as well as possible, but the vast quantity of often conflicting data they ingest means their output can’t always be taken at face value. Enterprise solutions are far safer and more accountable, largely because workplace AI systems are trained on specific company data, so the output is relevant and far less prone to hallucinations caused by irrelevant input. When the system’s use cases are openly communicated, and the safety of using it is made clear, colleagues will trust it far more readily.
Keeping humans in the loop
A human-in-the-loop approach, which prioritises human decision-making with AI’s support, drives safety and responsibility – and will be key to success.
Any AI rollout, no matter how pure its intentions, will not work if humans are removed from the decision-making process. As a rule, decision-making shouldn’t be delegated to AI, and it’s just as important that colleagues understand the organisation’s stance on the matter.
The AI tools that are deployed must be transparent in how they operate, and all results must be explainable. Humans can’t analyse thousands of data points in a split second the way AI can, but none of that rapid data processing is useful if we can’t explain how the results came to be. Ultimately, we shouldn’t be asking machines to decide for us; we should rely on them to make reasoned recommendations. This gives people time back to apply their uniquely human judgement, creativity and nous. It amplifies human potential rather than negating it.
And HR has a responsibility to articulate this message, assuage concerns and advocate for the proper use of the technology, while clearly justifying why everyone should use it.
A big part of the AI trust gap is that not everyone across an organisation will see the technology, and the motivations for using it, in the same way. That can change when the right principles are embraced and articulated by HR, the department that sits between colleagues and management. In other words, it’s the team closest to our people who will play one of the most important roles in empowering the workplace with AI. If they get the approach right, a more trusting, productive and engaged workplace awaits.
*Workday AI report with Forrester