
Navigating Generative AI For Customer Contact In 2024


By Brian Manusama, Executive Partner at Actionary

Without a shred of doubt, 2023 belonged to the rise of Artificial Intelligence, and particularly to Generative AI (GenAI). As we walked into 2024, organisational challenges loomed on the horizon, and executives found themselves caught between the fear of missing out (FOMO) on the transformative potential of GenAI and a nagging apprehension about the potential pitfalls of its adoption.

As I look back at 2023, it all reminds me of my early days at Gartner in 2014 covering the conversational AI market. Like in 2014, in this eagerly awaited era of GenAI, organisations were thrilled with the promise of unparalleled advancements in automation and decision-making. However, as they delved deeper into its adoption, a haunting sense of déjà vu began to emerge – reminiscent of the pitfalls encountered during the early days of chatbot implementation.

At first, the allure of GenAI’s capabilities seems irresistible. Companies envision seamless interactions, improved customer contact productivity, and enhanced customer experiences. The echoes of past successes with chatbots fuel optimism, but soon familiar challenges resurface. Communication breakdowns and misunderstandings become the norm as GenAI, despite its sophistication, struggles to comprehend nuanced human interactions. Users experience frustration, mirroring the early days of chatbots that left customers feeling unheard and dissatisfied. Just think of the recent DPD chatbot incident and the Microsoft Tay chatbot in March 2016.

THE QUESTION NOW IS WHAT TO DO?

On one hand, whispers of success stories from early adopters filled boardrooms, tempting decision-makers with the allure of unprecedented efficiency and innovation. The FOMO virus spread like wildfire, infecting the corporate psyche as visions of competitors gaining a significant edge danced before their eyes.

Yet, on the other hand, a lingering fear gnawed at the back of their minds – the fear of poor guidance and unforeseen consequences. The spectre of misguided decisions and unintended outcomes haunted the decision-makers, causing a collective hesitation to plunge headlong into the GenAI revolution.

In the many strategy sessions I witnessed over the year, heated debates ensued as executives grappled with the paradoxical challenge of embracing innovation while cautiously navigating the uncharted waters of artificial intelligence. I believe that those who tread the fine line between FOMO and caution will find themselves on a path of measured success. The key lies in embracing innovation while staying vigilant, ensuring that the transformative power of GenAI is harnessed responsibly and that the fear of poor guidance is mitigated through a combination of strategic planning and adaptability.

Realising the parallels between GenAI and the chatbot pitfalls, organisations need to take a step back. Lessons from the past spur a renewed emphasis on customer-centric design, robust ethical guidelines, and a harmonious integration of human intelligence with artificial capabilities.

Ultimately, the story of GenAI should echo the cautionary tale of chatbots, reminding organisations that technological advancements must be approached with a mindful awareness of past challenges. The key lies not just in the adoption of cutting-edge tools but in learning from history to navigate the complexities of human-AI interaction successfully.

COMMON CHALLENGES

Here are the top five common challenges I have seen organisations face when embracing conversational AI in their customer contact environments:

  1. Lack of understanding and education

Many stakeholders lack an understanding of what AI can and cannot do. Misaligned expectations and insufficient knowledge about AI capabilities can lead to unrealistic goals and disappointment.

  2. Data quality and bias

AI systems still heavily rely on data for training, and if the data used is of poor quality or biased, it can result in skewed and inaccurate outcomes. Bias in AI algorithms can perpetuate and even exacerbate existing societal inequalities.

  3. Security concerns

AI systems can be vulnerable to cyber threats and attacks. Malicious actors may exploit vulnerabilities in AI algorithms, leading to data breaches, unauthorised access, or manipulation of AI-driven processes.

  4. Overreliance on AI

Excessive reliance on AI without proper human oversight can lead to complacency. Human judgment and intervention are essential, especially in situations requiring empathy, creativity, and complex problem-solving that AI may struggle to replicate.

  5. Unintended consequences

Implementing AI systems without thorough consideration of potential unintended consequences can lead to unexpected outcomes. From social ramifications to unforeseen impacts on business processes, organisations must anticipate and address these issues proactively.

Successfully navigating these pitfalls requires a holistic approach, encompassing education, ethical considerations, transparency, and a commitment to continuous improvement throughout the AI adoption journey.

 

This article was originally published in Engage Magazine.
