DeepSeek

Consume DeepSeek AI

Overview

This node integrates with the DeepSeek AI platform to generate text completions based on chat-style prompts. It is designed for scenarios where users want to interact with an AI language model by sending a sequence of messages (roles and content) and receiving generated responses. Typical use cases include building conversational agents, generating creative writing, summarizing text, or any application requiring natural language generation.

For example, a user can send a prompt consisting of multiple messages from different roles (system instructions, user queries, assistant replies) and receive a coherent AI-generated continuation or answer.
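
A message sequence like the one described above might look as follows. This is a minimal sketch; the `ChatMessage` type is an assumed shape, not the node's actual source.

```typescript
// Hypothetical shape of one message in the node's chat-style prompt.
// Roles mirror the node's Role options: System, User, Assistant.
type ChatMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

// Example conversation history: system instructions followed by a user query.
const prompt: ChatMessage[] = [
  { role: "system", content: "You are a concise technical assistant." },
  { role: "user", content: "Summarize the benefits of nucleus sampling." },
];
```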

Properties

- Model: The AI model used to generate the completion. Options are loaded dynamically from the API and filtered to those whose IDs start with "deepseek-".
- Prompt: A collection of messages forming the conversation history or input prompt. Each message has:
  - Role: one of Assistant, System, or User.
  - Content: the text content of the message.
- Simplify: Whether to return a simplified response containing only the main data (the choices array) instead of the full raw API response. Defaults to true.
- Options: Additional optional parameters to customize the completion request:
  - Frequency Penalty: penalizes repeated tokens (range -2 to 2).
  - Maximum Number of Tokens: maximum number of tokens to generate (up to 32768).
  - Presence Penalty: encourages new topics (range -2 to 2).
  - Sampling Temperature: controls randomness (0 to 1).
  - Top P: nucleus sampling parameter (0 to 1).
  - Response Format: JSON object specifying the output format.
  - Logprobs: whether to return log probabilities of the output tokens.
  - Top Logprobs: number of top tokens to return log probabilities for (requires Logprobs to be enabled).
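
The options above can be pictured as a mapping onto the snake_case parameters of DeepSeek's OpenAI-compatible chat-completion endpoint. The following is a sketch under that assumption; `CompletionOptions` and `buildRequestBody` are illustrative names, not the node's actual code.

```typescript
// Assumed camelCase option names, drawn from the property list above.
interface CompletionOptions {
  frequencyPenalty?: number; // -2 to 2
  maxTokens?: number;        // up to 32768
  presencePenalty?: number;  // -2 to 2
  temperature?: number;      // 0 to 1
  topP?: number;             // 0 to 1
  logprobs?: boolean;
  topLogprobs?: number;      // requires logprobs = true
}

function buildRequestBody(
  model: string,
  messages: Array<{ role: string; content: string }>,
  opts: CompletionOptions = {},
): Record<string, unknown> {
  const body: Record<string, unknown> = { model, messages };
  // Only include parameters the user actually set, so API defaults apply.
  if (opts.frequencyPenalty !== undefined) body.frequency_penalty = opts.frequencyPenalty;
  if (opts.maxTokens !== undefined) body.max_tokens = opts.maxTokens;
  if (opts.presencePenalty !== undefined) body.presence_penalty = opts.presencePenalty;
  if (opts.temperature !== undefined) body.temperature = opts.temperature;
  if (opts.topP !== undefined) body.top_p = opts.topP;
  if (opts.logprobs !== undefined) body.logprobs = opts.logprobs;
  if (opts.topLogprobs !== undefined) body.top_logprobs = opts.topLogprobs;
  return body;
}
```

Omitting unset options, rather than sending nulls, lets the API fall back to its own defaults.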

Output

The node outputs JSON data under the json field. By default (when "Simplify" is enabled), the output contains a data property holding the array of choices returned by the DeepSeek API, which represent the generated completions or responses.

If "Simplify" is disabled, the node returns the full raw response from the API, including metadata and other details.

No binary data output is indicated in the source code.
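
The "Simplify" behavior described above can be sketched as a small transformation. This is an illustration of the documented output shape, not the node's actual implementation.

```typescript
// When simplify is enabled, only the choices array survives,
// wrapped in a `data` property; otherwise the raw response passes through.
function simplifyResponse(
  raw: Record<string, unknown>,
  simplify: boolean,
): Record<string, unknown> {
  if (!simplify) return raw; // full raw API response, metadata included
  return { data: (raw.choices as unknown[]) ?? [] };
}
```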

Dependencies

  • Requires an API key credential for authenticating with the DeepSeek AI service.
  • The base URL for API requests is set to http://host.docker.internal:1234/v1 by default, which may need adjustment depending on deployment.
  • The node depends on the DeepSeek API endpoints /models (to load available models) and the chat completion endpoint (not explicitly shown but implied).
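
The dynamic model loading described above amounts to filtering the `/models` listing by prefix. A sketch, assuming each entry in the endpoint's `data` array has at least an `id` field:

```typescript
// Minimal assumed shape of one entry returned by GET /models.
interface ModelInfo {
  id: string;
}

// Keep only models whose IDs start with "deepseek-", as the node does.
function filterDeepSeekModels(models: ModelInfo[]): string[] {
  return models
    .filter((m) => m.id.startsWith("deepseek-"))
    .map((m) => m.id)
    .sort();
}
```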

Troubleshooting

  • Common issues:

    • Invalid or missing API key will cause authentication failures.
    • If the API returns no models whose IDs start with "deepseek-", the Model dropdown will be empty and no model can be selected.
    • Providing malformed messages or empty content could lead to unexpected API errors.
    • Setting incompatible options (e.g., top_logprobs without enabling logprobs) may cause request rejection.
  • Error messages:

    • HTTP errors from the API are not raised as node failures (ignoreHttpStatusErrors: true), so error details end up in the node's output; inspect it carefully.
    • If the response does not contain expected properties (like choices), it may indicate an issue with the request payload or API availability.
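The incompatible-options pitfall noted above can be caught before sending the request. A sketch of such a pre-flight check; `validateOptions` is an illustrative helper, not part of the node's actual code:

```typescript
// top_logprobs is only valid when logprobs is enabled;
// flag the combination before the API rejects the request.
function validateOptions(opts: { logprobs?: boolean; topLogprobs?: number }): string[] {
  const problems: string[] = [];
  if (opts.topLogprobs !== undefined && opts.logprobs !== true) {
    problems.push("Top Logprobs requires Logprobs to be enabled.");
  }
  return problems;
}
```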
