AI – humans must remember the face behind the case

One of the primary challenges as we devolve more and more responsibility and decision-making capability to AI will be to remember the Face Behind the Case – that we are not dealing simply with caseload, but with real people with real concerns.

Some of the brightest minds behind the technologies of the future have been unusually vocal about the potential dangers of Artificial Intelligence (AI), with concerns ranging from simple job losses to world domination by heartless robots intent on wiping out mankind.

Others are more sanguine, pointing to profound benefits in sectors such as transportation, healthcare, finance and manufacturing, with a recent PwC report suggesting AI could add $15.7 trillion to the global economy by 2030.

Between these poles are countless businesses and organisations that are well aware AI is coming, if indeed it is not already here, and that are trying to work out the best ways to deal with a seismic shift in the tectonic plates of the economic environment.

Every business will have to chart its own course through this perfect storm of new technology, which is becoming ever harder to navigate, and try to harness its power in ways that are, on balance, more beneficial than harmful.

However, while AI will be – or, rather, already is – able to examine massive volumes of data, find patterns and make predictions or choices using algorithms, statistical models and machine-learning techniques, businesses must not lose sight of the fact that, ultimately, they are dealing with people (human beings. Remember them?).

This is generating animated discussion in the sector in which we operate – the increasingly important field of complaints handling and case management for major companies, organisations and Ombudsman services, which are required by regulation to identify, classify, report and remediate all complaints.

There is no doubt that AI is increasingly enabling enterprises to gain a holistic view of complaints, allowing them not only to comply with external regulators but also to understand the impact of complaints on their business.

What we have to work out now is the Balance of Effectiveness – that is, where do we draw the line between machine efficiency and human contact? How can we use AI to help our customers, and their customers, more efficiently without dehumanising our connections with people in their time of need?

We also have to get past the confusion between automation and AI. Automation is great for acknowledging complaints, allocating case identifiers and letting people know they’re in the system.

AI, on the other hand, has the capacity to interpret what complainants are saying, to gauge the emotional content of the language they use and to decide, on that basis, whether or not to escalate the issue to a human contact. It learns continually, and remorselessly.
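To make the distinction concrete, a deliberately simplified sketch of that kind of triage rule might look like the Python below. It is purely illustrative and not drawn from any particular product: the keyword weights, threshold and function names are hypothetical stand-ins for whatever trained sentiment or emotion model a real case management system would use.

```python
# Illustrative sketch only: a toy triage rule that flags emotionally
# charged complaint text for escalation to a human handler. The term
# list, weights and threshold are hypothetical placeholders, not a
# real product's logic.

DISTRESS_TERMS = {
    "furious": 2, "desperate": 3, "ignored": 1,
    "unacceptable": 1, "last straw": 3, "cry for help": 3,
}

ESCALATION_THRESHOLD = 3  # hypothetical cut-off for routing to a person


def distress_score(complaint_text: str) -> int:
    """Crude emotional-content score: sum of weights for matched terms."""
    text = complaint_text.lower()
    return sum(weight for term, weight in DISTRESS_TERMS.items() if term in text)


def should_escalate(complaint_text: str) -> bool:
    """Decide whether the case should go straight to a human contact."""
    return distress_score(complaint_text) >= ESCALATION_THRESHOLD


if __name__ == "__main__":
    example = "This is the last straw - I feel completely ignored and desperate."
    print(should_escalate(example))  # True: route to a human handler
```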

What it should not be used for is keeping people even further at arm's length from an organisation. Consumers are wise to this tactic and, if someone is already upset about an issue, putting obstacles in their way will only make them incandescent.

This raises another interesting point. If AI is learning all the time from its interactions with people, will it start to assume that boiling rage, sarcasm and threats are the norm for humans? And, if so, what will it make of that?

These are all questions for which, at the moment, there are few concrete answers. We are all, to a greater or lesser extent, feeling our way into this brave new world which holds such potential blessings and such potential terrors.

One of the primary challenges as we devolve more and more responsibility and decision-making capability to AI will be to remember the Face Behind the Case – that we are not dealing simply with caseload, but with real people with real concerns.

Here, we could take a leaf from the Ombudsman community, with whom we as a company deal closely. It recognises that for every entitled, vocal and persistent complainant, there are just as many vulnerable people whose contact can be a cry for help.

Their concerns must be dealt with humanely and, even when a complaint is not valid or relevant, Ombudsman services put serious effort into directing such people towards agencies that can actually help them.

That requires genuine humanity, not artificial intelligence. The legitimacy of a complaint within their jurisdiction must be upheld or denied without fear or favour, yet taking the time to deal gently and helpfully with someone, arguably at the expense of their own productivity targets, is akin to taking a minute out of a busy day to help someone across a busy road.

Perhaps AI will eventually be able to act in such a compassionate way. Until then, it’s up to humans to remember the Face Behind the Case.

 
