Troubleshooting & Common questions about Hugo AI Agent

Whether you are just getting started with Hugo or looking to improve your existing configuration, you may have questions or encounter issues. This article covers the most commonly asked questions to give you a first line of help.


Hugo is a next-generation AI support agent solution, fully integrated with Crisp.

While conversational at its core, Hugo goes beyond answering questions. He understands customer intent and context to proactively guide users toward resolution — just like a human support agent.


Built to autonomously handle a large share of incoming conversations, Hugo can also perform real actions through your integrations, such as checking account details, tracking orders, or issuing refunds, always within the rules you define.

By engaging proactively and providing concrete guidance, Hugo helps resolve issues faster and frees up your team to focus on complex cases.


Haven't met Hugo yet? Check out official resources



General Information


While Hugo aims to be intuitive to navigate and configure, we have many guides and resources to help you take better advantage of the existing features.

Below, you'll find a compilation of deep-dive articles on some of these features, which contain helpful information. Don't hesitate to consult them to learn more about Hugo 🤖


Beginner articles:


Hugo Features:


Workflows guides:


Find many more guides and examples in our "Automations & AI Agents" category



Testing, Reproducing & Debugging Hugo behaviours in the Playground


Inside of AI Agent → Evaluate → Playground, you will find a dedicated interface to directly test and interact with Hugo.


The Playground offers a few options to test Hugo without affecting your production setup:

  • Compatibility mode (Visible if Hugo is disabled in the "Activation" menu) → Allows you to choose whether to test Hugo itself, or your actual workflow-driven setup
  • Model → Try and compare different AI models to decide which one fits your needs best
  • Answer Guidance → To select the level of certainty Hugo should have in order to send a response


Once you're ready, you can then interact with Hugo from the dummy chatbox available on the right.


This is not only useful to evaluate different models or to test how Hugo behaves, but also to analyze and reproduce actual user conversations!

The Playground is the main way for you to understand why Hugo provided a certain response, and to adjust your instructions, training resources, selected model, and so on.

You can easily reproduce user exchanges by copy/pasting the user's question from their conversation into the Playground interface, and access some useful information.


As you interact with Hugo in the Playground, you can open the Guardrail drawer below the chatbox.

The Guardrails give you insight into Hugo's thinking and the actions he has taken. There, you can directly visualize the query he used to search through your resources, the workflow he triggered, or the action he performed.



This allows you to see what Hugo looks for, but also to visualize the resources he actually used to generate his responses. It helps you understand how Hugo thinks and what he searches for (so you can adjust the way you write your content or your instructions), and shows you which resources he actually used, helping you spot which ones could be worth improving, or whether new resources should be created.



Common Questions about Hugo AI Agent


In this section, we've compiled some of your most frequently asked questions.

Don't hesitate to go through them all to find plenty of insights on how to properly take advantage of all Hugo features.



Hugo is not finding an answer for an easy question, what can I do?


Most often, this is because he couldn't find a reliable source in your training data.

We recommend creating Question & Answer snippets to cover the most commonly asked questions. They are a very effective source of data for Hugo: think of them as an internal FAQ for your AI Agent 🤖


If you already have sources that should answer that question, review them to make sure they are formatted correctly: Question & Answer snippets shouldn't address Hugo or give him instructions; they should simply answer the question the way an FAQ or an article would, by addressing the user (as if you were answering them).
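For illustration, here is a hypothetical example of the difference (the topic and wording are made up):

```text
Avoid: "Hugo, if a user asks about refunds, explain our refund policy."

Prefer:
Question: How do I request a refund?
Answer: You can request a refund within 14 days of purchase from the
"Orders" section of your account.
```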


You can also reproduce the user's question in the Playground and check the "Guardrail" to see what Hugo searched for. This can help you adjust your existing sources, or the terminology used in your Question & Answers, for instance.


Finally, ensure that you don't have Instructions that would cause confusion or hesitation for that kind of question/situation.



Hugo is hallucinating and giving a wrong response, can this be corrected?


Hugo is designed and fine-tuned to rely on your training data as the source of truth.

Generally, if Hugo is replying with erroneous information, this indicates that there may be conflicting or missing training sources.


We recommend using the Playground to replay that user's question and opening the "Guardrails" to see Hugo's thought process, as well as the sources he searched for and used to generate his response.

This can help you see which sources are problematic so you can adjust them, and create new ones when relevant.



Hugo is not following the instructions I gave him, why?


Instructions should be used to define Hugo's behaviour and tone, or to give him clarifications, context and instructions about handling specific situations.

However, Hugo (and AI agents in general) is inherently tasked with specific objectives under the hood. You can adjust many aspects to tune Hugo for your business's identity, but keep in mind that AI agents are AI first, and display certain native patterns.


Also be wary of overly verbose instructions, and of how many you add.

The more instructions Hugo has to keep track of, the more they can become "diluted" and hard for Hugo to apply consistently across all of his responses.

For instance, if you have 30 different instructions, Hugo will attempt to fit his response to accommodate your 30-item checklist. In some cases, he may take shortcuts.

We recommend focusing on important instructions, and not wasting too much effort and instruction length adjusting minor AI-specific behavioural aspects.


When prompting Hugo with instructions, ensure that they are clear, imperative and direct. Avoid leaving too much room for interpretation and ambiguity.



Does the language I use to write my instructions and routing rules matter?


Not particularly: Hugo is able to understand them even if the user's language is different.

That said, if your audience is even somewhat international, we recommend using English for your prompts.



Hugo is costing more credits than I expected, why is that and how can I reduce it?


Hugo can consume different amounts of credits depending on several parameters:

  • The model used → matters a lot; you can compare their relative costs inside of the Hugo "Billing" section
  • The length of the conversation → the number of exchanges Hugo had with your users
  • The size of the context → While more minor, long instructions and exchanges requiring longer tasks or searches can lead to slightly higher generation costs


There are a few ways you can mitigate this:

  • You can switch AI model, to try lower-cost ones → These are generally quite excellent and able to easily deal with your most commonly asked questions
  • You can fine-tune your routing → to prevent Hugo from handling certain conversations he shouldn't, or which are not necessarily suited to AI Agents
  • You can improve your training resources → The better your training resources, the quicker and easier it is for Hugo to find the information that leads to a resolution



Does the Playground consume credits?


Yes, it does. The Playground allows you to test and reproduce user questions just as your users would in a real situation. It uses the same configuration and AI models as the ones available in production.

Since our infrastructure has costs to maintain, and AI generation is billed by the LLM provider you selected, AI usage inside of the Playground is also billed to reflect that.



Is Hugo trained on all the conversations in my Crisp inbox, including other users' messages?


No, Hugo is only aware of the current, ongoing conversation's context, not other ones.

On top of that, Hugo is trained on the data resources you provided him (Q&A snippets, Knowledge Base, Web content, File imports...). You can read more in this dedicated article about Hugo training.


Training Hugo directly on other users' conversations would pose both privacy and security risks. It could lead to the AI re-using another user's confidential information, or sharing information mentioned by an agent in a very specific context (such as invoice details, discounts, security/software bugs...)


Directly training AI agents on other users' data is therefore not only unsafe, but also against GDPR and most data privacy regulations.

This is something that Crisp & Hugo take very seriously, and we therefore do not collect or use other users' conversations to improve the service or train your AI Agents.



Why did Hugo escalate one of my conversations without replying?


In some situations, Hugo may auto-escalate conversations directly to your human inboxes. This can for instance happen if:

  • A routing rule with the action "Inbox" triggered during the exchange
  • Hugo could not find the answer to an email conversation (Hugo does not send escalation messages on email conversations by default)
  • Hugo couldn't handle a specific conversation or message (a different channel than the ones you selected, you ran out of AI credits...)
  • An issue occurred (such as a temporary maintenance, or an issue with the LLM provider's API)

In case of an error, lack of credits, or if Hugo didn't have the response to an email, you will always see a private note in the conversation informing you of it.


In such cases, attempt to replay the user's question inside of the Playground to analyze Hugo's actions from the "Guardrails" menu below the chatbox.

Should an issue have occurred and Hugo was unable to fall back to a different model, try selecting a different one temporarily to verify whether Hugo then starts handling conversations again.



Why is the crawling stopping at 2000 pages?


2000 pages is the per-domain crawl limit on the Essentials and Plus plans.

This already represents a very significant amount of training data, but we understand that in some cases, you may have more content to feed Hugo.


You can use other training sources to add more content, but if you are attempting to crawl products or libraries of services/items/estates... There is a much better-suited way to train Hugo on that data, thanks to MCP integrations. The next Q&A below is for you



How can I train Hugo on my products or items?


If you are looking to train Hugo on a catalog of products, items, estates, a library of services, etc., web crawling is generally not ideal, for several reasons.

Web content crawling aims at ingesting relatively static content: business and service information, blog posts, documentation or general business-related data.


However, products are dynamic by nature: they get updated, some are added, others removed, and their price and stock can change depending on many parameters.

Moreover, product content is very often loaded dynamically, making it hard for crawlers to detect, and harder to keep updated.


They also add a lot of unnecessary overhead: imagine having a big catalog of products that you have to open and search through every time you receive a question about them. Web crawling is far from ideal here.

Instead, we would need a way to organize it in an on-demand way, like querying a database.


This is exactly the kind of problem that MCP integrations solve:

MCPs allow you to expose tools for Hugo to connect to your own services and database.


You can for instance expose tools to fetch user information and order status, get a specific product, or search through products to offer suggestions and recommendations to your users.

By using MCP, you can essentially allow Hugo to request only the exact data he needs (and to perform actions) while ensuring that he always gets fresh, complete, and accurate information.


MCPs are also very easy to set up. You can learn more about them in this dedicated article.
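To make this more concrete, here is a minimal sketch of what an MCP-style tool definition could look like. The tool name, description, and schema fields below are purely hypothetical (not an actual Crisp or Hugo API), following the MCP convention of describing a tool's input with JSON Schema:

```python
# Hypothetical MCP-style tool definition for a product search tool.
# The name, description, and schema fields are illustrative only.
search_products_tool = {
    "name": "search_products",
    "description": (
        "Search the product catalog by keyword. Use this tool whenever the "
        "user asks about a product, its price, or its availability."
    ),
    "inputSchema": {  # JSON Schema describing the parameters Hugo must provide
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Keywords describing the product the user is asking about",
            },
            "max_results": {
                "type": "integer",
                "description": "Maximum number of matching products to return",
            },
        },
        "required": ["query"],
    },
}
```

The description tells Hugo when to use the tool; the input schema tells him exactly which parameters to pass.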



I connected an MCP but it doesn't seem to work, what is the issue?


No worries, this is a normal step during testing and development, and nothing that cannot be resolved.


First, verify that Hugo was able to connect to your MCP server inside of AI Agent → Automated → MCP & Integrations.

If Hugo cannot connect or fetch your tools, you may see an error there providing extra insight into what the issue may be.


If the MCP is connected but Hugo is not using it in situations where he should, start by identifying the root issue by testing it in the Playground with a question which should result in an MCP tool call:

  • If Hugo is not trying to use the tool at all → Ensure that your MCP server and tool descriptions are properly set and fetched, and that they clearly explain what the tool does and when Hugo should use it
  • If Hugo is using the tool but passing the wrong data → Verify that you properly defined your input schema, which should explain to Hugo which parameters & values he should provide when using the tool
  • If Hugo uses your tool correctly but his response suggests that an issue occurred → There may be an issue or bug in your MCP's backend logic, which can require some investigation on your end.


In those situations, we strongly recommend enabling logs in your MCP server's backend, in order to analyze the requests sent by Hugo, how your server processed them, and the responses it sent back to Hugo.

Logs will help you understand what really happened.
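As a minimal sketch of such logging, assuming a Python backend (the `run_tool` function below stands in for your real tool logic and is entirely hypothetical):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mcp-server")

def run_tool(tool_name, arguments):
    # Placeholder for your actual backend logic (database query, API call, etc.).
    return {"status": "ok", "tool": tool_name}

def handle_tool_call(tool_name, arguments):
    """Wrap each tool call so both the incoming request and the response are logged."""
    logger.info("Tool call received: %s %s", tool_name, json.dumps(arguments))
    try:
        result = run_tool(tool_name, arguments)
    except Exception:
        # Log the full traceback so backend bugs surface in your server logs.
        logger.exception("Tool %s raised an error", tool_name)
        raise
    logger.info("Tool %s returned: %s", tool_name, json.dumps(result))
    return result
```

With logs like these, you can compare the arguments Hugo actually sent against what your input schema expects, and see exactly what your server returned.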


If your MCP server can be tested locally or from a dedicated platform, try replaying the request sent by Hugo to see how it behaves and pinpoint the exact issue.

Updated on: 01/04/2026
