
DeepSeek

Consume DeepSeek AI


Overview

This node integrates with the DeepSeek AI platform to generate text completions from chat-style prompts. It is designed for scenarios where you interact with a language model by sending a sequence of messages (with user, assistant, and system roles) and receiving generated responses. Typical use cases include building conversational agents, generating content, summarizing text, or any other application that needs natural language generation.

For example, you can provide a conversation history as input messages and get the AI's next reply, or send system instructions to guide the AI's behavior in the completion.
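For instance, a prompt collection for that kind of exchange could look like the sketch below. The plain role/content pairs illustrate the message structure; they are not necessarily the node's exact internal format:

```typescript
// A chat-style prompt: a system instruction that steers the model,
// followed by the conversation so far. The node's Prompt property
// collects the same role/content pairs through its UI.
const messages = [
  { role: "system", content: "You are a concise technical assistant." },
  { role: "user", content: "Summarize these release notes for a changelog." },
  { role: "assistant", content: "Sure, please paste the notes." },
  { role: "user", content: "v2.1: fixed the auth timeout, added CSV export." },
];
```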

Properties

  • Model: The AI model used to generate the completion. Options are dynamically loaded from the available models whose names start with "deepseek-".
  • Prompt: A collection of messages forming the prompt. Multiple messages can be added and reordered. Each message has:
    • Role: one of Assistant, System, or User.
    • Content: the text content of the message.
  • Simplify: Whether to return a simplified version of the response containing only the main data (choices) instead of the full raw API response. Defaults to true.
  • Frequency Penalty: Number between -2 and 2 that penalizes new tokens based on their existing frequency in the text so far, reducing repetition.
  • Maximum Number of Tokens: The maximum number of tokens to generate in the completion. Most models support up to 2,048 tokens; newer ones support up to 32,768. Default is 16.
  • Presence Penalty: Number between -2 and 2 that penalizes tokens that have already appeared in the text so far, encouraging the model to move on to new topics.
  • Sampling Temperature: Controls the randomness of the output. Lower values make the output more deterministic; higher values make it more random. Range 0 to 1.
  • Top P: Controls diversity via nucleus sampling, with a value between 0 and 1 giving the cumulative probability threshold for token selection. Adjust either this or Sampling Temperature, not both.
  • Response Format: JSON object specifying the desired format of the model's output.
  • Logprobs: Boolean indicating whether to return log probabilities of the output tokens. If true, the response includes token-level log probabilities.
  • Top Logprobs: Integer between 0 and 20 specifying how many of the most likely tokens to return log probabilities for at each position. Requires Logprobs to be true (see the request sketch below, where the two are set together).
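Taken together, these properties map onto an OpenAI-style chat-completions request. The payload below is a sketch: the parameter names follow the OpenAI-compatible convention that DeepSeek-style endpoints generally accept, and the exact body the node sends may differ:

```typescript
// Hypothetical request body assembled from the node's properties.
// Field names follow the OpenAI-compatible chat-completions convention;
// verify them against your DeepSeek endpoint before relying on this.
const body = {
  model: "deepseek-chat",                   // Model (loaded from /models)
  messages: [                               // Prompt
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Write a haiku about message queues." },
  ],
  max_tokens: 256,                          // Maximum Number of Tokens
  temperature: 0.7,                         // Sampling Temperature (0 to 1)
  top_p: 1,                                 // Top P (tune this OR temperature)
  frequency_penalty: 0,                     // Frequency Penalty (-2 to 2)
  presence_penalty: 0,                      // Presence Penalty (-2 to 2)
  logprobs: true,                           // Logprobs
  top_logprobs: 5,                          // Top Logprobs (needs logprobs: true)
  // response_format: { type: "json_object" },  // Response Format (optional)
};
```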

Output

The node outputs JSON data representing the AI completion response. By default (when "Simplify" is enabled), the output contains a data field holding the array of completion choices returned by the API. Each choice typically includes the generated text and related metadata.

If "Simplify" is disabled, the full raw response from the DeepSeek API is returned, which may include additional fields beyond just the choices.

The node does not output binary data.

Dependencies

  • Requires an API key credential for authenticating with the DeepSeek AI service.
  • The base URL for API requests is set to http://host.docker.internal:1234/v1 by default, which may need adjustment depending on deployment.
  • The node dynamically fetches available models from the /models endpoint of the API (a sketch of this call follows below).
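For reference, the model lookup can be reproduced outside the node with a plain HTTP call. The snippet below is a sketch that assumes the default base URL above, a bearer-token API key, and the usual { data: [{ id }] } shape of a /models response; adjust all three for your deployment:

```typescript
// Sketch: list models the way the node's dropdown does. Only models
// whose IDs start with "deepseek-" appear as options.
const BASE_URL =
  process.env.DEEPSEEK_BASE_URL ?? "http://host.docker.internal:1234/v1";

async function listDeepseekModels(apiKey: string): Promise<string[]> {
  const res = await fetch(`${BASE_URL}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`GET /models failed with status ${res.status}`);
  const { data } = (await res.json()) as { data: Array<{ id: string }> };
  return data.map((m) => m.id).filter((id) => id.startsWith("deepseek-"));
}
```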

Troubleshooting

  • Common issues:
    • Invalid or missing API key will cause authentication failures.
    • Because only models whose names start with "deepseek-" are listed, the Model dropdown may show no options (or selections may fail) if the endpoint offers no such models.
    • Exceeding token limits (e.g., setting max tokens too high) may cause request rejections.
    • Using top_logprobs without enabling logprobs will likely cause errors.
  • Error messages:
    • HTTP errors from the API are ignored by default (ignoreHttpStatusErrors: true), but the node may still fail if the response is malformed.
    • Validation errors if required properties like model or prompt messages are missing.
  • Resolutions:
    • Ensure API credentials are correctly configured.
    • Use valid model names as listed by the node.
    • Adjust token and penalty parameters within allowed ranges.
    • Enable logprobs when using top_logprobs (see the guard sketch below).
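The logprobs constraints in particular can be checked before a request is sent. The helper below is hypothetical, not part of the node; it simply encodes the rules stated above:

```typescript
// Hypothetical pre-flight check mirroring the constraints above.
function validateSamplingParams(params: {
  logprobs: boolean;
  topLogprobs?: number;
}): void {
  const { logprobs, topLogprobs } = params;
  if (topLogprobs !== undefined && !logprobs) {
    throw new Error("Top Logprobs requires Logprobs to be enabled.");
  }
  if (topLogprobs !== undefined && (topLogprobs < 0 || topLogprobs > 20)) {
    throw new Error("Top Logprobs must be an integer between 0 and 20.");
  }
}

validateSamplingParams({ logprobs: true, topLogprobs: 5 });     // OK
// validateSamplingParams({ logprobs: false, topLogprobs: 5 }); // throws
```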

Links and References

  • DeepSeek API Pricing and Models
  • General concepts of language-model parameters such as temperature, top_p, frequency_penalty, and presence_penalty are covered in the OpenAI documentation and in similar docs from other LLM providers.
