Troubleshooting & Common questions about Hugo AI Agent

This article helps you diagnose the most common Hugo issues without losing the useful context behind each answer.


Hugo is designed to be easy to configure, but AI behavior can still depend on several layers at once: training resources, instructions, routing, escalation settings, model choice, integrations, and the conversation channel. Use this guide when Hugo gives an unexpected answer, does not answer, escalates too early, or behaves differently in production than in your tests.


Haven't met Hugo yet? Check out official resources



Useful Hugo resources


If you are still configuring Hugo, start with the guides below before troubleshooting edge cases. Most unexpected behavior comes from missing knowledge, unclear instructions, or routing rules that are too broad.


Beginner resources:


Hugo feature guides:


Workflow resources:


Find more guides and examples in the Hugo AI Agent & Chatbot category.



Use the Playground to reproduce issues


The best place to debug Hugo is AI Agent → Evaluate → Playground. It lets you test Hugo in a safe environment, compare models, and inspect what Hugo searched for or decided to do.


The Playground is useful because it lets you:

  • Replay a user question → paste the same wording from a real conversation and compare the answer
  • Compare models → test whether the behavior is model-specific or configuration-specific
  • Adjust Answer Guidance → evaluate how strict Hugo should be before sending an answer
  • Open Guardrails → inspect search queries, selected sources, triggered workflows, and actions
  • Test safely → validate changes before deploying them to live conversations


Hugo Playground Guardrails drawer


When reproducing an issue, copy the user message as closely as possible. If the live conversation included earlier context, paste that context too, because Hugo answers based on the current exchange and may react differently if the test is missing important details.


The Guardrails drawer is especially useful when Hugo cannot answer or gives an answer you did not expect. It shows what Hugo looked for, which resources were used, and whether a workflow, routing rule, or tool call was involved. This usually tells you whether the fix should happen in training resources, instructions, routing, or integrations.



Common questions about Hugo


Still have questions that were not covered in this article? Here is a collection of the most frequently asked questions on this topic.


Hugo is not finding an answer to an easy question. What can I do?


Most of the time, Hugo does not answer because it cannot find a reliable enough source in your training data. The fastest fix is usually to create or improve a Question & Answer snippet for that recurring question. Think of Q&A snippets as Hugo's private FAQ: they should answer the question clearly, in the same way you would answer a customer.


If you already have a source that should answer the question, review how it is written. Training resources should be informative, factual, and customer-facing. They should not address Hugo directly, include behavior prompts, or mix too many unrelated topics in the same entry. Instructions belong in Guidance → Instructions; knowledge belongs in Train.


Then reproduce the question in AI Agent → Evaluate → Playground and open Guardrails. Check the query Hugo used, the sources it found, and whether the expected source appeared. If Hugo searched with different wording than your content uses, update your Q&A snippet or article to include the terms customers actually use.


Keep in mind that Hugo does not automatically learn from resolved conversations over time. For privacy and safety reasons, Hugo uses the resources you provide, the current conversation context, and the tools/integrations you authorize. If a question comes back often, add the missing knowledge manually instead of assuming Hugo will infer it from past conversations.


Hugo is hallucinating or giving a wrong response, can this be corrected?


Yes. A wrong answer usually points to one of three causes: missing information, conflicting information, or unclear instructions. Start by replaying the user's question in the Playground and checking Guardrails to see which sources were used.


If Hugo used an outdated or ambiguous source, update that source directly. If several resources contradict each other, remove or rewrite the weaker one so Hugo has a single source of truth. If the answer is important or frequently asked, create a dedicated Question & Answer snippet with the exact wording and policy Hugo should rely on.


If the issue only happens in production, compare the live conversation with your Playground test. Hugo may have received additional context, used a tool, followed a routing rule, or handled a different channel. Collect a recent conversation URL and the exact user message before asking support to investigate further.


Hugo is not responding, sends duplicate messages, or behaves inconsistently. What should I check?


First, test the same user message in AI Agent → Evaluate → Playground. If Hugo replies correctly there but not in the live conversation, the issue is likely related to activation, channel settings, routing, credits, workflow behavior, or a temporary provider/API issue rather than the answer itself.


Useful checks:

  • Activation → confirm Hugo is enabled in AI Agent → Agent → Activation and that the channel is selected
  • Credits → verify that your workspace has enough Hugo credits available
  • Routing → check whether a routing rule moved or escalated the conversation before Hugo answered
  • Workflows → review whether a workflow started, paused, or handed the conversation back to Hugo
  • Model behavior → switch models temporarily in the Playground to see whether the issue is model-specific
  • Conversation context → make sure an operator message, private action, or previous automation did not change the state


If the issue is recent and reproducible, gather a conversation URL and, when available, a support/debug link from AI Agent → Agent → Settings → Get help from support. This helps the team investigate the exact exchange instead of guessing from a description.


Does the language I use for instructions and routing rules matter?


Hugo can understand instructions and routing rules even when users speak another language. However, if your audience is international, writing prompts in English is usually the safest choice because it keeps your setup easier to review, compare, and maintain across teams.


What matters more than the language is the quality of the prompt. Instructions and routing rules should be clear, imperative, and specific. Avoid vague wording, long checklists, or prompts that try to cover too many unrelated cases at once.


Also remember that instructions are for behavior, not knowledge. Use them to define tone, boundaries, and response style. Use training resources to answer factual questions. If you put product facts or policies inside instructions, Hugo may treat them as behavior guidance instead of searchable knowledge.


Hugo is costing more credits than expected. Why, and how can I reduce usage?


Hugo credit usage depends on several factors: the selected model, conversation length, amount of context, number of messages, tool usage, and how often Hugo needs to search or reason before answering. Longer instructions, long exchanges, and complex tool-based tasks can also increase usage.


To reduce usage without lowering quality too much:

  • Compare models → some models are better suited for everyday support and cost less per answer
  • Improve training resources → focused resources help Hugo answer faster and with fewer retries
  • Use routing intentionally → avoid sending conversations to Hugo when they are clearly outside its scope
  • Keep instructions focused → long or overlapping instructions increase context and can reduce consistency
  • Review automation scope → make sure Hugo is enabled on the channels and cases where it actually brings value


The goal is not to make Hugo answer fewer useful questions, but to avoid unnecessary AI handling where a workflow, routing rule, or human handoff would be more appropriate.


Does the Playground consume credits?


Yes. The Playground uses the same Hugo infrastructure and AI models as production conversations, so generating and evaluating answers there consumes credits as well.


That said, the Playground is still the best place to test changes before going live. A small amount of controlled testing usually costs less than letting an unclear setup create repeated wrong answers or escalations in production.


Can unused monthly Hugo credits be carried over to the next month?


No. Unused monthly Hugo credits currently do not roll over. Each month starts with the amount of credits included in your plan, and unused credits from the previous month are not saved.


For teams with variable support volume, it is worth reviewing usage from AI Agent → Billing and adjusting your setup around the topics, channels, and periods where Hugo brings the most value.


Is Hugo trained on all conversations in my Crisp inbox?


No. Hugo is aware of the current ongoing conversation, but it is not trained on all other conversations in your inbox. Hugo uses the training resources you provide, such as Q&A snippets, knowledge base articles, website content, files, and the current conversation context.


This is important for privacy and security. Training Hugo directly on unrelated past conversations could expose confidential information, reuse details from another customer's issue, or repeat something an agent said in a very specific context. Crisp and Hugo do not use other customers' conversation data to train your AI Agent.


If you want Hugo to improve on recurring questions, turn those learnings into clean training resources: write a Q&A snippet, update a knowledge base article, improve a website page, or connect a tool through MCP when fresh operational data is needed.


Why did Hugo escalate a conversation without replying?


Hugo may escalate directly when it determines that a human should take over, or when your configuration tells it to do so. This can happen when a routing rule triggers an Inbox action, when Hugo cannot confidently answer, when the channel is not enabled for Hugo, when credits are unavailable, or when a temporary provider/API issue prevents a safe answer.


Email conversations may also behave differently: when Hugo cannot answer an email conversation, it may escalate without sending an escalation message by default. In cases such as errors, missing credits, or failed provider calls, Crisp usually adds a private note explaining what happened.


To investigate, replay the user's message in the Playground and open Guardrails. Check whether Hugo searched your resources, triggered a workflow, followed a routing rule, or failed because of configuration. If a technical issue occurred, try another model temporarily and collect a recent conversation URL for support.


Why is website crawling stopping at 2,000 pages?


For Essentials and Plus plans, Hugo can crawl up to 2,000 pages per domain. This is already a large amount of training data, but some businesses have larger websites, catalogs, developer portals, or product libraries.


If you hit that limit, first decide whether all pages are actually useful for support. Use filters to crawl the most relevant paths and exclude pages that add noise. For content that is highly dynamic, such as product catalogs, inventory, prices, account details, or order status, web crawling is usually not the best source anyway.


For dynamic operational data, use an integration or MCP tool so Hugo can fetch fresh information on demand instead of searching through thousands of static pages.


How can I train Hugo on products, items, or dynamic data?


Web crawling works best for relatively stable content: documentation, policies, product explanations, service pages, blog posts, and general business information. It is not ideal for data that changes often, such as product availability, prices, inventory, order status, subscriptions, account details, or booking slots.


For dynamic data, MCP integrations are usually the better option. MCP lets you expose tools that Hugo can call when needed. Instead of training Hugo on a huge catalog that may become outdated, Hugo can request the exact information required at the moment of the conversation.


For example, an MCP server can let Hugo look up an order, fetch a customer's plan, search products, retrieve appointment availability, or perform a controlled action such as issuing a refund. This keeps the answer fresher, reduces training noise, and gives you more control over what Hugo can and cannot do.
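For illustration, a single MCP tool definition with explicit descriptions might look like the sketch below. The tool name, fields, and example values are hypothetical; MCP tools declare their inputs with JSON Schema, and clear descriptions are what help Hugo decide when to call a tool and which parameters to send.

```python
import json

# Hypothetical MCP tool definition (names and fields are illustrative only).
# MCP tools describe their inputs with JSON Schema; explicit descriptions
# help the AI agent decide when to call the tool and how to fill parameters.
lookup_order_tool = {
    "name": "lookup_order",
    "description": (
        "Look up the current status of a customer order. "
        "Use when the user asks where their order is or whether it shipped."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "The order reference shown in the confirmation email.",
            },
            "email": {
                "type": "string",
                "description": "The email address used at checkout, for verification.",
            },
        },
        "required": ["order_id"],
    },
}

print(json.dumps(lookup_order_tool, indent=2))
```

Vague descriptions such as "gets order data" are a common reason an agent never tries a tool or fills in the wrong fields; spell out both what the tool returns and when it should be used.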


I connected an MCP server but it does not seem to work. What should I check?


Start in AI Agent → Automate → MCP & Integrations and confirm that Hugo can connect to your MCP server and fetch its tools. If there is a connection or schema issue, the interface may show an error that points to the cause.


Then test a question in the Playground that should clearly require that tool. Open Guardrails and look at what happened.


Common MCP debugging paths:

  • Hugo does not try to use the tool → improve the MCP server and tool descriptions so Hugo understands when the tool should be used
  • Hugo uses the tool but sends the wrong parameters → review the input schema and make field descriptions more explicit
  • Hugo calls the tool correctly but the answer is wrong → inspect your backend logic, returned data, and error handling
  • The tool works locally but not through Hugo → check authentication, request signing, network access, and server logs


Enable backend logs while testing. Logs let you compare what Hugo sent, how your server processed it, and what your server returned. If possible, replay the same request locally or through your API testing tool to isolate whether the issue comes from Hugo's tool call or from your backend.
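A lightweight way to get that comparison is to log every tool call at the boundary of your server. The sketch below is a generic Python example with a hypothetical handler registry, not Crisp-specific code: it records what was received and what was returned, so you can line the logs up against Hugo's Guardrails view.

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("mcp-tools")

# Hypothetical handler registry for illustration; in a real server these
# would be the functions your MCP tools dispatch to.
TOOL_HANDLERS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def handle_tool_call(name: str, arguments: dict) -> dict:
    """Dispatch a tool call and log both the request and the response.

    Logging both sides makes it easy to compare what the agent sent
    with what your backend actually returned.
    """
    log.info("tool call: %s args=%s", name, json.dumps(arguments))
    try:
        result = TOOL_HANDLERS[name](**arguments)
        log.info("tool ok: %s result=%s", name, json.dumps(result))
        return result
    except Exception:
        log.exception("tool failed: %s", name)
        raise

# Replaying the same call locally isolates Hugo's tool call from your backend.
print(handle_tool_call("lookup_order", {"order_id": "ORD-1"}))
```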


Updated on: 04/05/2026
