CALM: Rasa's Answer to AI Hallucinations

October 31st, 2023

Kara Hartnett

Imagine you’re chatting with an AI assistant, expecting a straightforward answer, and suddenly it gives an answer that seems to come out of nowhere. This phenomenon, known in the tech world as an AI "hallucination," is more than just a quirky hiccup. It's those moments when our trusted digital sidekicks detour into the unexpected instead of providing the clarity we seek.

For businesses, this isn't just a minor glitch. It's a challenge that needs addressing, especially when customer trust is on the line.

This blog will explore why this happens and what it means for companies using AI for customer interactions. We'll also look at how Rasa’s new CALM technology keeps conversations hallucination-free and on track, delivering the reliable responses your customers expect.

Let’s break down the mystery of AI hallucinations

Ever wondered why sometimes, when chatting with an AI assistant, you get an answer that feels like it's from a parallel universe? Let's dive in to answer the question, “What are AI hallucinations?”

Neural networks are the brain behind the machine

Think of neural networks as the brain of our AI assistant. Picture a vast web, with nodes and connections buzzing with activity, much like our brain's neurons. These networks learn from data; just like us, they can sometimes get things wrong.

Imagine teaching a kid to recognize animals. If you only show them pictures of black cats, they might think all cats are black. Similarly, if an AI's training data is skewed or limited, it might develop quirky notions.
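
To make that concrete, here's a tiny, purely illustrative Python sketch, a made-up frequency counter rather than a real neural network, of what happens when the training data only ever contains black cats:

```python
# A toy illustration of how skewed training data shapes a model's "beliefs".
# Hypothetical frequency-based guesser -- nothing like a real neural network.
from collections import Counter

# Training data is skewed: every cat example happens to be black.
training_examples = [("cat", "black")] * 50 + [("dog", "brown")] * 30 + [("dog", "white")] * 20

def most_likely_color(animal, examples):
    """Guess an animal's color from whatever colors appeared in training."""
    colors = Counter(color for a, color in examples if a == animal)
    return colors.most_common(1)[0][0] if colors else "unknown"

print(most_likely_color("cat", training_examples))  # 'black' -- the only color it has ever seen
# Show it a white cat and its skewed experience leads it astray, just as limited
# training data can leave an AI assistant with quirky notions.
```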

Deciphering out-of-context responses

Language isn't just about words; it's about context. The word "bat" could mean a flying mammal or something you hit a ball with. Here's where things get tricky for AI.

Modern AI uses attention mechanisms to focus on specific parts of an input. But if this spotlight shines on the wrong detail, the AI might misunderstand the context, leading to those incorrect responses.
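
As a rough picture of that spotlight, here's a minimal sketch of dot-product attention over hand-made toy word vectors (nothing like a production model), showing which context word "bat" leans on:

```python
# A minimal sketch of dot-product attention with toy, hand-made 2-D word vectors.
# The softmax weights show which context word the model "focuses on" for "bat".
import numpy as np

vectors = {
    "bat": np.array([1.0, 0.5]),       # the ambiguous word
    "cave": np.array([1.0, 0.0]),      # animal-flavored context
    "baseball": np.array([0.0, 1.0]),  # sports-flavored context
}

def attention_weights(query, context):
    """Softmax over dot products: higher weight = more focus on that context word."""
    scores = np.array([vectors[query] @ vectors[w] for w in context])
    weights = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(context, weights.round(2)))

print(attention_weights("bat", ["cave", "baseball"]))
# {'cave': 0.62, 'baseball': 0.38} -- the spotlight lands on "cave", so "bat" reads as the animal.
# If the weights had tipped the other way, the model would misread the sentence.
```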

Overfitting vs. generalizing

An effective AI system must balance applying past knowledge with adapting to new situations. But finding this balance isn't always easy. An overfitted AI is like a student who crams for an exam, memorizing facts without understanding them. It might cling too tightly to its training data, leading to responses that feel out of touch.
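
A quick numerical sketch makes the cramming analogy concrete. Assuming the "truth" is a noisy but basically linear trend, a simple model picks up the trend, while a high-degree polynomial memorizes the noise and stumbles on a new question:

```python
# A sketch of overfitting vs. generalizing, assuming the "truth" is a noisy linear trend.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + rng.normal(0, 0.1, size=x_train.size)  # linear trend plus noise

simple_fit = np.polyfit(x_train, y_train, deg=1)    # generalizes: learns the underlying trend
crammed_fit = np.polyfit(x_train, y_train, deg=7)   # overfits: memorizes every noisy point

x_new = 1.2  # a "question" slightly outside the exam material
print(np.polyval(simple_fit, x_new))   # stays close to the true value of about 2.4
print(np.polyval(crammed_fit, x_new))  # typically drifts far off -- an out-of-touch answer
```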

The continuous learning journey with feedback loops

Some AI systems continuously look for feedback, tweaking their responses based on user reactions. Think of it as training a pet: you reward good behavior and correct the bad. However, if the feedback is inconsistent, the AI might get confused, leading to unexpected conversational detours.
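
Here's a minimal sketch of that idea, with a single hypothetical preference score nudged by user reactions; consistent feedback reinforces a behavior, while mixed signals leave the assistant nowhere:

```python
# A sketch of a feedback loop: a single preference score nudged by user reactions.
def update_preference(score, feedback, learning_rate=0.2):
    """Move the assistant's preference up on praise (+1) and down on correction (-1)."""
    return score + learning_rate * feedback

consistent = [+1, +1, +1, +1]    # rewards always point the same way
inconsistent = [+1, -1, +1, -1]  # mixed signals for the same behavior

score = 0.0
for fb in consistent:
    score = update_preference(score, fb)
print(round(score, 2))  # 0.8 -- the behavior is clearly reinforced

score = 0.0
for fb in inconsistent:
    score = update_preference(score, fb)
print(round(score, 2))  # 0.0 -- the assistant never settles, so its behavior stays unpredictable
```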

Multi-modal challenges

Today's AI isn't just text-savvy. It might interpret voice tones, images, or even videos. This multi-modal approach, while powerful, adds another layer of complexity. It's like trying to follow multiple conversations at a party. If the AI misreads one input type (say, misinterpreting a voice's tone), its response is likely to miss the mark.
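
The sketch below, with two hypothetical classifiers whose confidences are simply averaged, shows how one misread modality can flip the final response:

```python
# A sketch of multi-modal fusion: hypothetical text and tone classifiers whose
# per-label confidences are simply averaged before picking a response.
def fuse(text_scores, tone_scores):
    """Average the two modalities' confidences and return the winning label."""
    combined = {label: (text_scores[label] + tone_scores[label]) / 2 for label in text_scores}
    return max(combined, key=combined.get)

text_scores  = {"complaint": 0.7, "compliment": 0.3}  # the words lean toward "complaint"
good_tone    = {"complaint": 0.8, "compliment": 0.2}  # tone read correctly
misread_tone = {"complaint": 0.1, "compliment": 0.9}  # sarcasm misread as cheerfulness

print(fuse(text_scores, good_tone))     # 'complaint' -- the modalities agree
print(fuse(text_scores, misread_tone))  # 'compliment' -- one misread input flips the response
```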

What’s an example of an AI hallucination?

Imagine a customer interacting with an AI assistant for a bank. The customer wants to inquire about the process of disputing a transaction.

Customer: "I noticed a strange transaction on my account. How do I dispute it?"

AI Assistant (Hallucinating): "To dispute a transaction, you need to visit your nearest branch with your transaction details and a valid ID. Our team will assist you further."

In this scenario, the AI assistant provides information that seems relevant at first glance but is actually incorrect. Most banks allow customers to initiate a transaction dispute online or over the phone, and visiting a branch is not required. This is an example of an AI hallucination where the assistant provides inaccurate information, potentially leading the customer to take unnecessary actions and causing frustration.

Another example is Google Bard’s now-infamous hallucination, which wiped roughly $100 billion off Google's market value. Bard claimed that the James Webb Space Telescope took the very first pictures of exoplanets, when in fact the European Southern Observatory's Very Large Telescope captured those images.

Such hallucinations are problematic, especially when users rely on AI for accurate information or when businesses use AI to interact with customers. Errors can also creep in from the other direction: customers regularly mistype details, such as entering 2032 instead of 2023. You must therefore have mechanisms in place to detect and correct faulty inputs and outputs to maintain trust and reliability.
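
One lightweight mechanism, sketched below with a hypothetical helper, is a simple sanity check on collected details, so an obviously mistyped year gets questioned rather than confidently acted on:

```python
# A sketch of a simple input sanity check, assuming the assistant collects a
# transaction date as "YYYY-MM-DD". A mistyped year is questioned, not acted on.
from datetime import date

def check_transaction_date(raw: str) -> str:
    """Ask for a correction instead of running with an obviously mistyped date."""
    entered = date.fromisoformat(raw)
    if entered > date.today():
        return f"{raw} is in the future -- could you double-check the year?"
    return f"Looking up transactions from {raw}..."

print(check_transaction_date("2032-03-14"))  # flags the typo and asks the customer to confirm
print(check_transaction_date("2023-03-14"))  # proceeds normally
```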

What’s Rasa's solution to AI hallucinations?

In conversational AI, ensuring smooth and reliable interactions is paramount. Rasa's CALM, or Conversational AI with Language Models, offers a more intuitive and accurate conversational experience.

Traditional AI systems have leaned heavily on intents, essentially predefined categories to which user inputs are matched. While effective to an extent, this method can sometimes be too restrictive, leading to the aforementioned AI hallucinations. CALM takes a refreshing departure from this norm.

At the heart of CALM is Dialogue Understanding (DU). DU considers the entire conversation instead of merely focusing on the latest user input. It's like conversing with someone who recalls what was discussed a few minutes ago, ensuring continuity and context. This holistic approach allows CALM to capture layered meanings, clarify ambiguities, and genuinely comprehend the user's intent.
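
The toy sketch below (conceptual only, not Rasa's actual implementation) shows why that matters: the same final message resolves cleanly with the earlier turns in view and stays ambiguous without them:

```python
# A conceptual sketch (not Rasa's actual implementation) of why the whole
# transcript matters: "that one" only makes sense in light of earlier turns.
def resolve_reference(conversation):
    """Interpret 'that one' by scanning back through everything said so far."""
    for turn in reversed(conversation[:-1]):  # every turn before the latest message
        for card in ("debit card", "credit card"):
            if card in turn.lower():
                return card
    return "unclear"

conversation = [
    "I want to report a problem with my credit card.",
    "Sure -- is this about a lost card or a disputed charge?",
    "Actually, can you block that one for now?",
]
print(resolve_reference(conversation))       # 'credit card' -- context carried forward
print(resolve_reference(conversation[-1:]))  # 'unclear' -- the latest message alone is ambiguous
```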

But understanding the user is just one side of the coin. CALM also ensures the conversation aligns with your business objectives. It strikes a delicate balance between being responsive to users and adhering to the business's predefined logic. This ensures conversations don't deviate from the core business logic while remaining dynamic and user-centric.

Rasa’s CALM technology ensures a safe user experience by allowing your conversational AI team to pre-define every response, eliminating the risk of AI hallucinations common with direct connections to LLMs like ChatGPT. Unlike methods that generate responses on the fly from large language models, CALM maintains control and accuracy, addressing the significant risks of misinformation in business contexts.
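
Conceptually, the pattern looks like the sketch below, with a hypothetical template table rather than Rasa's actual API: the system only ever chooses which vetted response to send, and the wording itself is never generated on the fly:

```python
# A sketch of the "pre-defined response" idea with a hypothetical template table
# (not Rasa's actual API): the system only selects which vetted message to send.
RESPONSE_TEMPLATES = {
    "utter_dispute_process": "You can dispute a charge online or by phone -- no branch visit needed.",
    "utter_ask_transaction": "Which transaction would you like to dispute?",
}

def respond(next_step: str) -> str:
    """Look up an approved template instead of generating free-form text."""
    return RESPONSE_TEMPLATES.get(next_step, "Sorry, I can't help with that yet.")

print(respond("utter_dispute_process"))  # always the vetted wording, never an invented procedure
```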

Of course, real conversations aren't always linear. Users might digress, seek clarifications, or even correct themselves mid-way. CALM is adept at navigating these conversational nuances. It can adapt realistically, ensuring the dialogue remains coherent and contextually relevant.

Safety and reliability are cornerstones of any AI system, and CALM is no exception. One of its standout features is its robust design that prevents AI hallucinations. Users can trust CALM to provide accurate and reliable information, ensuring they always get clarity.

CALM isn't just another tool in the conversational AI toolkit. It's a transformative approach that rethinks how we interact with AI. By blending user understanding, business logic, and real-time adaptability, CALM promises smarter and more intuitive conversations.

Get ready for more reliable conversations

AI hallucinations are more than just quirky missteps. They shed light on the intricate challenges of designing flawless conversational AI experiences. However, these challenges can be addressed with the right tools and approaches.

Rasa's CALM technology is at the forefront of this evolution. By moving away from the limitations of traditional intent-based methods and embracing a comprehensive Dialogue Understanding, CALM ensures AI interactions align more closely with user expectations. It feels like engaging with someone who listens, comprehends, and remembers the conversation's nuances.

Explore further with Rasa

Curious about the transformative potential of CALM for your AI interactions? Dive deeper into Rasa's solutions and join a community shaping the next wave of conversational AI. Click here to learn more about Rasa's CALM technology.