Chat Console

The chat console is where you pressure-test your bot before exposing it to customers. Ask real questions, inspect citations, and refine your content until the answers feel trustworthy and on-brand.

What the chat console is for

Think of the chat console as your staging environment for answers. It's not a general AI playground – it's a focused view of how your bot responds when it's only allowed to use your own content.

  • Try the exact questions your customers actually ask.
  • Compare how answers change as you add or update documents.
  • Check that citations point to the right parts of your content.
  • Confirm tone, level of detail, and “politeness” feel right for your brand.

What you'll see

The console is intentionally simple so you can focus on the answers:

  • Bot selector – choose which bot you're testing. Each bot has its own configuration and knowledge base.
  • Conversation history – messages are shown in order with clear separation between user and bot.
  • Answer citations – when enabled, answers include a list of document snippets used to generate the response.
  • Composer – where you type questions. It supports multi-line input and keyboard shortcuts.

How to test effectively

The fastest way to improve quality is to test with real-world scenarios, not toy questions.

  • Use actual emails or support tickets you've received and paste them in as questions.
  • Deliberately ask vague or messy questions – customers rarely phrase things perfectly.
  • Check that sensitive topics are handled cautiously and that answers reference the right policies.
  • If an answer is off, adjust the source document instead of fighting with prompts.

Reading citations

Citations exist so you can trust – and verify – what the bot is saying.

  • Each answer includes references to one or more documents and a short snippet of the relevant section.
  • If a citation doesn't look right, that's usually a sign the source document needs tightening or splitting.
  • If there are no citations, the bot may be falling back to generic behaviour – treat that as a red flag and review your content coverage.
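As a concrete illustration, you can think of a cited answer as a small record pairing the response text with the snippets it drew on. The field names below are assumptions chosen for clarity, not FAQBot's actual payload format.

```python
# Hypothetical shape of an answer with citations (field names are
# illustrative, not FAQBot's actual format).
answer = {
    "text": "You can cancel your subscription from the billing page.",
    "citations": [
        {
            "document": "billing-faq.md",
            "snippet": "Open Billing > Subscription and choose Cancel.",
        }
    ],
}

def looks_trustworthy(ans: dict) -> bool:
    # No citations usually means generic fallback behaviour: treat
    # that as a signal to review your content coverage.
    return len(ans["citations"]) > 0

print(looks_trustworthy(answer))  # → True
```

A quick check like this captures the rule of thumb above: an answer you can't trace back to a document is one you should investigate before trusting.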

If an answer looks wrong

FAQBot is constrained by your content. When something feels off:

  1. Check the citations – did it pull from the right place?
  2. Open the source document and improve the wording or structure.
  3. Re-run the question in the console after the document is reprocessed.

Over time you'll end up with better internal documentation and more consistent answers for both humans and the bot.