Actions
- Assistant Actions
- Embedding Actions
- File Actions
- Report Actions
- Text Actions
- Thread Actions
Overview
This node provides text classification using Large Language Models (LLMs), exposed under the "Embedding" resource as the "LLM Based Classify" operation. It classifies a given input text into one of several user-defined categories by leveraging an LLM such as GPT-4o Mini or other models available via the OpenAI API.
Typical use cases include:
- Automatically categorizing customer feedback, support tickets, or survey responses into predefined topics.
- Sorting documents or messages based on content themes.
- Enabling branching workflows in n8n based on classification results for dynamic automation paths.
For example, you could pass in a product review and classify it into categories like "Positive", "Negative", or "Neutral", letting the LLM capture sentiment and context beyond simple keyword matching.
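For illustration, the sketch below shows the kind of chat-completion call such a classification ultimately boils down to. It uses the official openai npm package; the helper name classifyText, the prompt wording, and the plain-text response handling are assumptions for illustration, not the node's actual internals.

```typescript
import OpenAI from "openai";

// Illustrative only: the node's real prompt construction and parsing live in its own source.
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function classifyText(targetText: string, categories: string[]): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: `Classify the user's text into exactly one of these categories: ${categories.join(", ")}. Reply with the category name only.`,
      },
      { role: "user", content: targetText },
    ],
    temperature: 0,
  });
  return response.choices[0].message.content?.trim() ?? "";
}

// Example: sentiment-style classification of a product review.
classifyText("The battery died after two days.", ["Positive", "Negative", "Neutral"])
  .then((category) => console.log(category)); // e.g. "Negative"
```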
Properties
| Name | Meaning |
|---|---|
| Authentication | Choose which API credentials to use: either dedicated OpenAI Analytics API credentials or existing OpenAI API credentials. |
| Target Text | The text string that you want the LLM to classify. |
| Categories | A list of categories (strings) that the LLM will use to classify the target text. |
| Model | The specific LLM model to use for classification. Options are dynamically loaded from available completion models (e.g., gpt-4o-mini, gpt-4o). |
| Response Format | The format of the response returned by the LLM: either plain text (category name only) or structured JSON including category, confidence, and reasoning. |
| Use Branch Outputs | Boolean flag that enables branching in the workflow based on the classification result. If enabled, different branches can be triggered depending on the category assigned. |
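As a rough illustration, a node configured for the product-review example above might carry parameters along the following lines. The key names and credential identifiers here are assumptions made for readability; in a real workflow the values live under the node's "parameters" object and should be set through the node UI.

```typescript
// Hypothetical parameter shape for illustration only; the actual property keys may differ.
interface LlmClassifyParams {
  authentication: "openAiAnalyticsApi" | "openAiApi";
  targetText: string;
  categories: string[];
  model: string;
  responseFormat: "text" | "json";
  useBranchOutputs: boolean;
}

const exampleParams: LlmClassifyParams = {
  authentication: "openAiApi",
  targetText: "={{ $json.reviewText }}", // n8n expression pulling the text from the previous node
  categories: ["Positive", "Negative", "Neutral"],
  model: "gpt-4o-mini",
  responseFormat: "json",
  useBranchOutputs: true,
};
```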
Output
The node outputs a JSON object containing the classification result. Depending on the selected response format:
- Text: The output JSON contains the category name as a simple string.
- JSON: The output JSON includes a structured object with fields such as:
  - category: The assigned category name.
  - confidence: A confidence score or probability indicating how sure the LLM is about the classification.
  - reasoning: Explanation or rationale provided by the LLM for the classification decision.
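For example, with the JSON response format selected, a single output item might look roughly like the following (the values are illustrative, not produced by the node):

```typescript
// Illustrative output item for the JSON response format; values are made up.
const exampleOutput = {
  category: "Negative",
  confidence: 0.92,
  reasoning: "The review describes a product failure and expresses dissatisfaction.",
};
```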
If branching is enabled, the node also supports routing the workflow execution along different paths based on the classification outcome.
The node does not output binary data.
Dependencies
- Requires valid API credentials for OpenAI services, either through dedicated OpenAI Analytics API credentials or standard OpenAI API credentials.
- The node dynamically loads available LLM models from the OpenAI API.
- Network access to OpenAI endpoints is necessary.
- No additional external dependencies beyond OpenAI API access.
Troubleshooting
- Credential Errors: If the node fails to authenticate, ensure that the correct API key credential is selected and properly configured in n8n. Check for expired or revoked keys.
- Model Loading Issues: If no models appear in the dropdown, verify network connectivity and API permissions.
- Empty or Invalid Categories: Each category must be a non-empty string. Providing empty categories will cause errors.
- Input Text Missing: The "Target Text" property is required; leaving it empty will cause the node to fail.
- API Rate Limits: Hitting OpenAI rate limits may cause errors; consider adding retry logic (a minimal sketch follows this list) or upgrading your plan.
- Branching Not Working: If branching is enabled but no branches trigger, verify that the workflow has branches configured matching the category names exactly.
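For the rate-limit case, a minimal retry-with-exponential-backoff sketch is shown below. It assumes the official openai npm package and wraps a direct API call; whether the node itself retries internally is not covered by this documentation.

```typescript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Retry a chat-completion call with exponential backoff when the API returns HTTP 429 (rate limit).
async function completeWithRetry(prompt: string, maxRetries = 3): Promise<string> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const res = await client.chat.completions.create({
        model: "gpt-4o-mini",
        messages: [{ role: "user", content: prompt }],
      });
      return res.choices[0].message.content ?? "";
    } catch (err: any) {
      // OpenAI SDK errors expose the HTTP status; 429 means "too many requests".
      if (err?.status === 429 && attempt < maxRetries) {
        await new Promise((r) => setTimeout(r, 2 ** attempt * 1000)); // back off 1s, 2s, 4s, ...
        continue;
      }
      throw err;
    }
  }
  throw new Error("retries exhausted"); // satisfies the return type; the loop normally returns or throws
}
```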
Links and References
- OpenAI API Documentation
- OpenAI Models Overview
- n8n Documentation - Creating Custom Nodes
- LLM Classification Concepts
This summary is based solely on static analysis of the provided source code and property definitions without runtime execution.