n8n-nodes-langfuse-ai-agent
This is an n8n community node that integrates with Langfuse to run AI agents with prompt management and tracing capabilities. It allows you to execute AI workflows using prompts stored in Langfuse, with automatic tracing and monitoring of your AI operations.
Langfuse is an open-source LLM observability and prompt management platform that helps you monitor, debug, and improve your AI applications.
n8n is a fair-code licensed workflow automation platform.
- Installation
- Operations
- Credentials
- Usage
- Resources
Installation
Follow the installation guide in the n8n community nodes documentation.
Operations
AI Agent (Langfuse)
Executes an AI agent using prompts from Langfuse with structured output parsing and automatic tracing.
Inputs:
- Main: Any data you want to pass to the AI agent
- Chat Model: An AI language model connection (required)
- Output Parser: A structured output parser connection (required)
Outputs:
- Main: The structured output from the AI agent
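The exact shape of the Main output is determined by the schema you define on the connected output parser. For illustration only (the field names here are hypothetical), a parser that defines `summary` and `sentiment` fields would produce items like:

```json
{
  "summary": "The customer is asking to change the delivery date of their order.",
  "sentiment": "neutral"
}
```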
Credentials
Langfuse API
To use this node, you need to set up Langfuse credentials:
- Sign up for Langfuse: Create an account at Langfuse Cloud or set up a self-hosted instance
- Get your API keys:
  - Go to your Langfuse project settings
  - Navigate to the API Keys section
  - Create a new API key or use an existing one
- Configure credentials in n8n:
  - Langfuse Host URL: Your Langfuse instance URL (default: https://cloud.langfuse.com)
  - Public Key: Your Langfuse public API key
  - Secret Key: Your Langfuse secret API key
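The node uses these values internally when it connects to Langfuse. If you want to confirm your keys outside n8n first, a minimal sketch with the official `langfuse` JavaScript SDK (the prompt name below is hypothetical) would look roughly like this:

```typescript
import { Langfuse } from "langfuse";

// The same three values as in the n8n credential
const langfuse = new Langfuse({
  baseUrl: "https://cloud.langfuse.com", // Langfuse Host URL
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
});

// Fetching a prompt fails fast if the host or keys are wrong
const prompt = await langfuse.getPrompt("my-prompt"); // hypothetical prompt name
console.log(prompt.prompt);

await langfuse.shutdownAsync();
```

If this prints your prompt, the same three values will work in the n8n credential.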
Usage
Setting up your workflow
- Add the Langfuse AI Agent node to your workflow
- Connect a Chat Model: Add an AI language model node (like OpenAI, Anthropic, etc.) and connect it to the "Chat Model" input
- Connect an Output Parser: Add a structured output parser node and connect it to the "Output Parser" input
- Configure the node:
  - Prompt Name: Select a prompt from your Langfuse project (the node automatically fetches the available prompts)
  - Prompt Parameters: Provide parameters for your prompt as a JSON object
Example workflow
Here's a simple example of how to use the Langfuse AI Agent node:
- Start with a trigger (e.g., Manual trigger)
- Add an AI model (e.g., OpenAI)
- Add an output parser (e.g., Structured Output Parser)
- Add the Langfuse AI Agent node and connect both the AI model and output parser
- Configure the node with your prompt name and parameters
Node Configuration
The node configuration panel shows:
- Prompt Name: Dropdown of available prompts from your Langfuse project
- Prompt Parameters: JSON object containing parameters for your prompt
Node Panel
The node panel displays:
- Input connections for Chat Model and Output Parser
- Output connection for the structured result
- Credentials configuration for Langfuse API
Example Prompt Parameters
{
  "user_input": "{{ $json.user_message }}",
  "context": "{{ $json.context }}"
}
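Each key in this object is substituted into the matching `{{ }}` placeholder of the selected Langfuse prompt; the n8n expressions (such as `{{ $json.user_message }}`) are resolved by n8n before the call, so Langfuse receives the final string values. For illustration only, a prompt stored in Langfuse that consumes these two parameters might read:

```text
Answer the user's message using the context provided.

Context: {{context}}
Message: {{user_input}}
```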
Tracing and Monitoring
The node automatically:
- Traces all AI operations in Langfuse
- Logs prompts, responses, and metadata
- Provides observability into your AI workflows
- Enables debugging and performance monitoring
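The tracing follows the standard Langfuse-LangChain callback pattern. The sketch below is not the node's source code, just a standalone illustration of that pattern; the OpenAI model is a stand-in for whatever Chat Model you connect in n8n:

```typescript
import { CallbackHandler } from "langfuse-langchain";
import { ChatOpenAI } from "@langchain/openai";

// The handler forwards prompts, responses, and metadata to Langfuse
const langfuseHandler = new CallbackHandler({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  baseUrl: "https://cloud.langfuse.com",
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Passing the handler as a callback is what produces the trace in Langfuse
const response = await model.invoke("Summarize: the meeting moved to Friday.", {
  callbacks: [langfuseHandler],
});
console.log(response.content);

// Ensure buffered trace events are sent before the process exits
await langfuseHandler.flushAsync();
```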
Resources
- n8n community nodes documentation
- Langfuse Documentation
- Langfuse API Documentation
- n8n AI Nodes Documentation