Question and Answer Chain With Source Documents Output

A node for answering questions based on retrieved documents, with an option to include source documents in the output for enhanced traceability and transparency.

Overview

This node implements a question-and-answer chain that retrieves relevant documents to answer user queries. It combines a language model with a retriever component to find contextually relevant information and generate answers grounded in that content. Optionally, it can include the source documents used to generate the answer in the output, which makes answers transparent and traceable.

Common scenarios where this node is beneficial include:

  • Building AI-powered knowledge bases or help desks where users ask questions and receive answers grounded in specific documents.
  • Automating customer support by retrieving relevant documentation snippets to respond accurately.
  • Research assistance tools that provide answers supported by cited sources.

Practical example: A user inputs a question about a product feature; the node retrieves related product manuals or FAQs and generates an answer referencing those documents. If enabled, the source documents are also returned so the user can verify the answer's origin.
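As a mental model of how the node works, the flow resembles a classic LangChain.js retrieval QA chain. The sketch below assumes the older "langchain" package's RetrievalQAChain API and a hypothetical answerWithSources helper; the node's actual implementation may differ.

```typescript
// Minimal sketch of a retrieval QA chain that can also return its sources.
// Uses the classic LangChain.js "langchain" package API; the n8n node's actual
// internals may differ, so treat this as a mental model, not its source code.
import { RetrievalQAChain } from "langchain/chains";

export async function answerWithSources(
  llm: any,        // model instance from the connected language model node
  retriever: any,  // retriever instance from the connected retriever node
  query: string,
  returnSourceDocuments = true, // mirrors the "Return Source Documents" flag
) {
  const chain = RetrievalQAChain.fromLLM(llm, retriever, { returnSourceDocuments });

  // The chain retrieves relevant documents, stuffs them into the prompt as
  // context, and asks the model to answer the query from that context.
  const result = await chain.call({ query });

  return {
    response: result.text,
    documents: returnSourceDocuments ? result.sourceDocuments : undefined,
  };
}
```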

Properties

  • Save time with an example of how this node works: A notice field linking to an example workflow demonstrating the node’s usage.
  • Query: The question or query string to be answered. Depending on the node version, this defaults to a different input JSON field (input, chat_input, or chatInput).
  • Prompt Source (User Message): Selects the source of the prompt text, either taken automatically from a connected Chat Trigger node's chatInput field or defined manually via an expression or static text. Available in versions ≥1.3.
  • Text From Previous Node: When using the automatic prompt source, this is the actual text input taken from the previous node's chatInput field (versions ≥1.4).
  • Text: When defining the prompt manually, this is the user-provided text or expression containing the query (shown when "Prompt Source" is set to "Define below").
  • Return Source Documents: Boolean flag indicating whether to include the retrieved source documents alongside the answer in the output.
  • Options → System Prompt Template: A customizable template string for the system prompt that guides the language model’s behavior. It should include {context} for the retrieved context and {question} for the user’s query. Defaults to a fallback prompt that instructs the model not to guess when it is unsure.
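As an illustration of the System Prompt Template option, the sketch below shows a custom template containing the required {context} and {question} placeholders. The wording and the use of LangChain's PromptTemplate are assumptions for demonstration, not the node's actual default.

```typescript
// Illustrative custom System Prompt Template (not the node's built-in default).
// The retrieved documents are substituted for {context} and the user's query
// for {question} before the prompt is sent to the language model.
const systemPromptTemplate = `You are a support assistant for our product documentation.
Use only the following context to answer the question at the end.
If the context does not contain the answer, say you don't know instead of guessing.

Context:
{context}

Question: {question}
Helpful answer:`;

// Classic LangChain.js import path (assumed); single-brace placeholders are
// parsed as input variables, so the template exposes "context" and "question".
import { PromptTemplate } from "langchain/prompts";

const prompt = PromptTemplate.fromTemplate(systemPromptTemplate);
// prompt.inputVariables now includes "context" and "question".
```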

Output

The node outputs an array of items, each with a json property containing:

  • response: The generated answer string produced by the language model based on the retrieved documents.
  • documents (optional): An array of source documents used to generate the answer, included only if "Return Source Documents" is enabled. These documents provide traceability and allow verification of the answer's basis.

No binary data output is produced by this node.
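To make the shape concrete, a single output item with source documents enabled might look like the sketch below; the values and the LangChain-style pageContent/metadata document fields are illustrative assumptions.

```typescript
// Hypothetical shape of one output item when "Return Source Documents" is
// enabled. All values, and the pageContent/metadata fields on each document,
// are invented for illustration.
const exampleItem = {
  json: {
    response: "The export feature is available on the Pro plan and above.",
    documents: [
      {
        pageContent: "Exporting reports is available on the Pro and Enterprise plans...",
        metadata: { source: "product-manual.pdf", page: 12 },
      },
    ],
  },
};
```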

Dependencies

  • Requires a connected AI language model node providing the language generation capability.
  • Requires a connected retriever node responsible for fetching relevant documents based on the query.
  • Uses LangChain libraries internally for prompt templating and chain execution.
  • This node does not require its own credentials or external API keys, but the connected language model and retriever nodes may require authentication configured separately in n8n.

Troubleshooting

  • Empty Query Parameter: If the query parameter is empty or undefined, the node throws an error stating "The ‘query’ parameter is empty." Ensure the input data contains the expected fields or that manual prompt text is provided.
  • Incorrect Input Field Names: Different node versions expect different input JSON field names (input, chat_input, or chatInput). Verify that the incoming data matches the expected field for your node version, or normalize it with a fallback expression (see the sketch after this list).
  • Missing Connected Nodes: The node requires exactly one connected language model and one retriever node. Errors may occur if these connections are missing or misconfigured.
  • Prompt Template Issues: Customizing the system prompt template incorrectly (e.g., omitting required variables like {context} or {question}) may lead to poor or nonsensical answers.
  • Performance Considerations: Large document sets or complex queries may increase execution time. Adjust retriever settings or limit document size as needed.
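If the field name of the incoming question varies, normalizing it before the node avoids the empty-query error. The sketch below is a hypothetical helper; the inline n8n expression in the comment is an assumption about how the same fallback might be written in the "Text" property.

```typescript
// Hypothetical helper that falls back across the input field names used by
// different node versions. As an n8n expression in the "Text" property, the
// equivalent would be roughly: {{ $json.chatInput ?? $json.chat_input ?? $json.input }}
function resolveQuery(json: Record<string, unknown>): string {
  const query = (json.chatInput ?? json.chat_input ?? json.input ?? "") as string;
  if (!query) {
    // Same condition that triggers the node's empty-query error.
    throw new Error("The 'query' parameter is empty.");
  }
  return query;
}

// Example: resolveQuery({ chat_input: "How do I export a report?" }) returns the question.
```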

Links and References
