AI in the contact centre
Reward and risk
In our rapidly evolving digital world, artificial intelligence (AI) is revolutionising many industries, including contact centres. While bots that triage web chats are already embedded in customer service, the emergence of large language models (LLMs) such as ChatGPT has expanded the possibilities of AI in the contact centre. The buzz around this new technology is proving justified by the rewards it is bringing to both businesses and consumers, but the immaturity of AI poses significant legal and ethical risks.
Unlocking potential rewards
For the contact centre, AI delivers its most dramatic rewards in quality assurance and efficiency: areas where it can unlock the true potential of your customer experience services.
Enhancing customer experience
Chatbots and virtual assistants powered by AI can provide instant responses, reducing wait times and increasing customer satisfaction. Large language models enable chatbots to offer a wider range of responses and greater task automation than conventional virtual assistants.
Managing complex queries
Artificial intelligence can automate routine tasks, enabling contact centres to reserve human agents for complex issues. LLM-powered voicebots bring emotional understanding to automated call handling and are capable of natural-sounding conversations. Replacing traditional IVRs with AI technologies can elevate the customer experience and improve customer perception of your brand.
Personalising the interactions
Past interactions can be analysed by AI in real time to offer customised solutions, predicting optimal routes that improve the customer journey and personalise the engagement with the customer.
Harnessing predictive analytics
Real-time data analysis helps contact centres anticipate customer needs, identify opportunities, and proactively address issues.
Why settle for less when AI can give you 100%? Without AI, contact centres can review only a fraction of their voice interactions. AI, by contrast, can be used to review every interaction, generating more accurate and more reliable insights into call activity. Armed with high-quality reports, management can make evidence-based decisions to improve service delivery.
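To make the idea concrete, here is a minimal, hypothetical sketch of reviewing 100% of calls rather than a sample. Everything in it is invented for illustration: the transcripts, the checklist phrases, and the scoring rule are placeholders, not a real quality-assurance system or any vendor's implementation.

```python
# Illustrative only: score every call transcript against a simple QA checklist.
# All phrases, transcripts, and the scoring rule below are invented examples.

GREETING_PHRASES = ["thanks for calling", "how can i help"]
CLOSING_PHRASES = ["anything else", "have a great day"]

def score_call(transcript: str) -> float:
    """Return the fraction of checklist items this transcript satisfies."""
    text = transcript.lower()
    checks = [
        any(p in text for p in GREETING_PHRASES),  # did the agent greet properly?
        any(p in text for p in CLOSING_PHRASES),   # did the agent close properly?
    ]
    return sum(checks) / len(checks)

def review_all(calls: list[str]) -> float:
    """Review 100% of calls (not a sample) and return the average QA score."""
    return sum(score_call(c) for c in calls) / len(calls)

calls = [
    "Thanks for calling, how can I help? ... Is there anything else? Have a great day!",
    "Hello. ... Bye.",
]
print(review_all(calls))  # average checklist score across every call: 0.5
```

In practice the per-call scoring would be done by a speech-to-text and language model pipeline rather than keyword matching; the point of the sketch is simply that automation lets the loop run over every call, not a hand-picked subset.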
Navigating legal and ethical risks
While AI brings great benefits, incorporating it into decision-making poses challenges.
Bias and hallucinations
‘Here’s What Happens When Your Lawyer Uses ChatGPT’ 
… bogus judicial decisions with bogus quotes and bogus internal citations … 
Large language models look for patterns and learn from the data they are supplied with, meaning that they inherit any biases present in that data; and in the absence of sufficient source data, they can generate hallucinations (the term for fictitious AI-generated content) to fill the gaps. It’s easy to see that the consequences could be serious if AI-generated content were passed off as original material and not tested for accuracy. This scenario recently arose in a lawsuit against a US airline, where lawyers presented fake case citations as credible evidence; in fact, the citations had been hallucinated by ChatGPT. 
Erosion of human touch
Overreliance on AI risks diminishing the human element in customer engagement, which is crucial for empathy. Customer service automation, especially in sensitive areas, must be approached carefully. Artificial intelligence works best when it complements human agents, helping them do their job. The businesses and vendors that realise this will gain the most rewards from AI.
Privacy laws and legal accountability
Two businesses failed to abide by the law when obtaining data to train their AI models 
They violated America’s Electronic Communications Privacy Act by the collection and use of private data, as well as unlawfully intercepting communications between users and third-party services via integrations 
The effectiveness of AI relies on extensive access to data, raising privacy concerns. Protecting confidentiality and sensitive information is essential in contact centres. When assessing software vendors, your selection criteria should consider the strength of their data protection policies and information security management.
Microsoft will pay legal damages for customers sued for copyright infringement 
Whether accountability for erroneous AI decisions falls on the developers of AI or on the companies using it is currently under debate. The recent announcement from Microsoft will go some way to easing the concern of users. However, it remains to be seen whether this approach will be taken elsewhere.
Integrating AI into contact centres offers promise, but caution is necessary. Understanding where AI’s data comes from, how data is shared, and the consequences for those affected is vital. AI can enhance processes but cannot fully replace human agents using intuitive software like Syntelate XA. At Inisoft, we are carefully considering new AI integrations that improve the agent and customer experience while satisfying legal requirements.
 Benjamin Weiser, ‘Here’s What Happens When Your Lawyer Uses ChatGPT’, New York Times, 27 May 2023, available at https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html, accessed on 23 November 2023.
 Judge Kevin Castel of the Southern District of New York, quoted in ‘Lawyer apologizes for fake court citations from ChatGPT’, CNN, updated 28 May 2023, available at https://edition.cnn.com/2023/05/27/business/chat-gpt-avianca-mata-lawyers/index.html, accessed on 23 November 2023.