Overview
The node integrates with the DeepSeek AI platform to generate text completions based on chat-style prompts. It is designed for scenarios where users want to interact with an AI language model by sending a sequence of messages (roles and content) and receiving generated responses. This can be useful for building conversational agents, automating customer support replies, generating creative writing, or any application requiring natural language generation.
For example, a user might send a prompt consisting of a system instruction and a user question, and receive an assistant-generated answer. The node supports specifying the AI model to use, controlling output randomness, token limits, penalties to influence repetition or topic shifts, and more.
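To make the message structure concrete, here is a minimal sketch of the chat-style prompt described above. The `ChatMessage` type and the example contents are illustrative, not taken from the node's source; the role names follow the Assistant/System/User roles listed in the Properties below.

```typescript
// Illustrative shape of the chat-style prompt the node sends.
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

// A system instruction followed by a user question, as described above.
const prompt: ChatMessage[] = [
  { role: "system", content: "You are a concise support assistant." },
  { role: "user", content: "How do I reset my password?" },
];

// The model's answer comes back as an assistant message.
const reply: ChatMessage = {
  role: "assistant",
  content: "Open Settings > Account and choose 'Reset password'.",
};
```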
Properties
| Name | Meaning |
|---|---|
| Model | The AI model used to generate the completion. Options are dynamically loaded from available models starting with "deepseek-". |
| Prompt | A collection of messages forming the conversation history. Each message has a Role (Assistant, System, or User) and Content (the text of the message). |
| Simplify | Whether to return a simplified version of the response containing only the main data (choices array) instead of the full raw API response. Defaults to true. |
| Frequency Penalty | Number between -2 and 2 that penalizes new tokens based on their existing frequency in the text so far, reducing verbatim repetition. |
| Maximum Number of Tokens | The maximum number of tokens to generate in the completion. Most models support up to 2048 tokens; newer ones up to 32,768. Default is 16. |
| Presence Penalty | Number between -2 and 2 that penalizes new tokens based on whether they appear in the text so far, encouraging discussion of new topics. |
| Sampling Temperature | Controls randomness of the output, from 0 (deterministic) to 1 (most random). Lower values produce less random completions. |
| Top P | Controls nucleus sampling diversity, from 0 to 1. For example, 0.5 means half of all likelihood-weighted options are considered. It is generally recommended to adjust either this or temperature, not both. |
| Response Format | JSON object specifying the desired format of the model's output. |
| Logprobs | Boolean indicating whether to return log probabilities of the output tokens. If true, the response includes log probabilities for each token. |
| Top Logprobs | Integer 0–20 specifying how many of the most likely tokens' log probabilities to return at each position. Requires Logprobs to be true. |
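The properties above can be pictured as mapping onto the request body of an OpenAI-compatible chat completions call. The following sketch is a hypothetical helper, not the node's actual code; the snake_case field names assume DeepSeek's OpenAI-compatible schema.

```typescript
// Hypothetical mapping from node properties to a request body
// (assumes an OpenAI-compatible chat completions schema).
interface CompletionOptions {
  model: string;
  messages: { role: string; content: string }[];
  maxTokens?: number;
  temperature?: number;
  topP?: number;
  frequencyPenalty?: number;
  presencePenalty?: number;
  logprobs?: boolean;
  topLogprobs?: number;
}

function buildRequestBody(opts: CompletionOptions): Record<string, unknown> {
  const body: Record<string, unknown> = {
    model: opts.model,
    messages: opts.messages,
    max_tokens: opts.maxTokens ?? 16, // node default per the table above
  };
  // Optional parameters are only sent when the user sets them.
  if (opts.temperature !== undefined) body.temperature = opts.temperature;
  if (opts.topP !== undefined) body.top_p = opts.topP;
  if (opts.frequencyPenalty !== undefined) body.frequency_penalty = opts.frequencyPenalty;
  if (opts.presencePenalty !== undefined) body.presence_penalty = opts.presencePenalty;
  if (opts.logprobs !== undefined) body.logprobs = opts.logprobs;
  if (opts.topLogprobs !== undefined) body.top_logprobs = opts.topLogprobs;
  return body;
}
```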
Output
The node outputs JSON data representing the AI model's completion results. By default (when "Simplify" is enabled), the output contains a `data` field holding the array of choices returned by the model. Each choice typically includes generated text and possibly other metadata depending on the model.
If "Simplify" is disabled, the full raw response from the DeepSeek API is returned, which may include additional fields beyond just the choices.
The node does not output binary data.
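The Simplify behaviour can be sketched as follows. This is illustrative only (the node's internals may differ): when enabled, only the choices array is kept under a `data` field; otherwise the raw response passes through unchanged.

```typescript
// Illustrative sketch of the "Simplify" toggle's effect on output.
interface RawResponse {
  id?: string;
  model?: string;
  choices: unknown[];
  usage?: unknown;
}

function simplifyResponse(raw: RawResponse, simplify: boolean): unknown {
  // Simplified output keeps only the choices, under a `data` field.
  return simplify ? { data: raw.choices } : raw;
}
```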
Dependencies
- Requires an active connection to the DeepSeek AI platform.
- Needs an API key credential configured in n8n for authentication with DeepSeek.
- The base URL for API requests is set to `http://127.0.0.1:1234/v1` by default, which should be updated to the actual DeepSeek API endpoint in production.
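As a sketch of how the base URL and API key credential come together, the helper below builds an authenticated request without sending it. The `/chat/completions` path and Bearer-token header assume DeepSeek's OpenAI-compatible API; they are not taken from the node's source.

```typescript
// Assumed default base URL; override for production DeepSeek endpoints.
const baseUrl = "http://127.0.0.1:1234/v1";

// Builds (but does not send) an authenticated POST request.
function buildRequest(
  apiKey: string,
  body: unknown
): { url: string; options: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${baseUrl}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`, // API key credential from n8n
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    },
  };
}
```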
Troubleshooting
- Invalid Model Selection: Selecting a model not starting with "deepseek-" will likely cause errors. Ensure the model dropdown is populated correctly and a valid model is chosen.
- API Authentication Errors: Missing or invalid API credentials will prevent successful requests. Verify that the API key credential is properly configured.
- Token Limit Exceeded: Setting `maxTokens` too high may exceed the model's context length limit, causing errors. Adjust to the token limits supported by each model.
- Incompatible Parameters: Using `top_logprobs` without enabling `logprobs` will cause errors. Make sure `logprobs` is true if `top_logprobs` is set.
- Network Issues: Since the default base URL points to localhost, ensure the DeepSeek service is running locally or update the base URL to the correct remote endpoint.
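Several of these failure modes can be caught before the request is sent. The pre-flight check below mirrors the troubleshooting notes (penalties within -2 to 2, `top_logprobs` within 0–20 and only with `logprobs` enabled); it is a hypothetical helper, not part of the node.

```typescript
// Hypothetical pre-flight validation mirroring the troubleshooting notes.
function validateParams(p: {
  frequencyPenalty?: number;
  presencePenalty?: number;
  logprobs?: boolean;
  topLogprobs?: number;
}): string[] {
  const errors: string[] = [];
  for (const [name, v] of [
    ["frequency_penalty", p.frequencyPenalty],
    ["presence_penalty", p.presencePenalty],
  ] as const) {
    if (v !== undefined && (v < -2 || v > 2)) {
      errors.push(`${name} must be between -2 and 2`);
    }
  }
  if (p.topLogprobs !== undefined) {
    if (p.topLogprobs < 0 || p.topLogprobs > 20) {
      errors.push("top_logprobs must be between 0 and 20");
    }
    if (!p.logprobs) {
      errors.push("top_logprobs requires logprobs to be true");
    }
  }
  return errors;
}
```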
Links and References
- DeepSeek API Pricing and Models
- OpenAI-like Completion Parameters Explanation (for understanding parameters like temperature, top_p, penalties)