DeepSeek

Consume DeepSeek AI

Overview

The node provides an interface to DeepSeek AI's FIM (Fill-In-the-Middle) completion resource, specifically supporting the "Complete" operation. It sends a prompt to a selected AI model and receives generated completions based on that prompt. This is useful for scenarios such as generating code snippets, writing assistance, content creation, or any task requiring AI-generated text completions. A sketch of the underlying request follows the examples below.

Practical examples:

  • Auto-generating code based on a partial snippet or description.
  • Creating creative writing prompts or story continuations.
  • Summarizing or expanding user input text dynamically.
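
As an illustration of what the Complete operation does behind the scenes, here is a minimal sketch of the underlying HTTP call. It assumes the node's default base URL (see Dependencies below), an OpenAI-compatible /completions path, and an illustrative DEEPSEEK_API_KEY environment variable; none of these details are taken from the node's source, so treat it as a sketch rather than the actual implementation.

// Minimal sketch of the "Complete" call, assuming an OpenAI-compatible
// /completions endpoint at the node's default base URL. DEEPSEEK_API_KEY
// and the model ID are illustrative placeholders.
const BASE_URL = "http://host.docker.internal:1234/v1";

async function complete(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}`,
    },
    body: JSON.stringify({
      model: "deepseek-coder",
      prompt,
      max_tokens: 256,
    }),
  });
  if (!res.ok) throw new Error(`Completion request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].text; // the generated completion
}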

Properties

  • Model: The AI model used to generate the completion. Only models whose IDs start with "deepseek-coder" are selectable.
  • Prompt: The input text prompt for which the AI will generate completions.
  • Simplify: Whether to return a simplified version of the response containing only the relevant completion data instead of the full raw API response.
  • Frequency Penalty: Penalizes new tokens based on their existing frequency in the generated text to reduce repetition. Range: -2 to 2.
  • Maximum Number of Tokens: Limits the maximum number of tokens generated in the completion. Most models support up to 2048 tokens; newer ones up to 32,768.
  • Presence Penalty: Penalizes new tokens based on whether they appear in the text so far, encouraging the model to talk about new topics. Range: -2 to 2.
  • Sampling Temperature: Controls randomness of completions. Lower values make output more deterministic and repetitive; higher values increase randomness. Range: 0 to 1.
  • Top P: Controls diversity via nucleus sampling. A value of 0.5 means half of all likelihood-weighted options are considered. Adjust either this or the temperature, but not both.
  • Echo Prompt: Whether to include the original prompt text in the output along with the completion.
  • Logprobs: Whether to return log probabilities of each output token in the completion.
  • Suffix: Text appended after the generated completion.
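
For reference, these properties correspond to fields in the JSON body of the completion request. The field names below follow the OpenAI-compatible naming that DeepSeek's API generally uses; they are not extracted from the node's source, so treat this as a sketch.

// Sketch of the request body the properties translate into; field names assume
// OpenAI-compatible naming and are not confirmed against the node's source.
interface CompletionRequestBody {
  model: string;              // Model (an ID starting with "deepseek-coder")
  prompt: string;             // Prompt
  frequency_penalty?: number; // Frequency Penalty (-2 to 2)
  max_tokens?: number;        // Maximum Number of Tokens
  presence_penalty?: number;  // Presence Penalty (-2 to 2)
  temperature?: number;       // Sampling Temperature (0 to 1)
  top_p?: number;             // Top P (tune this or temperature, not both)
  echo?: boolean;             // Echo Prompt
  logprobs?: boolean;         // Logprobs (the raw API may expect a number here)
  suffix?: string;            // Suffix
}

const exampleBody: CompletionRequestBody = {
  model: "deepseek-coder",
  prompt: "def fibonacci(n):",
  max_tokens: 128,
  temperature: 0.2,
};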

Output

The node outputs JSON data representing the completion results from the DeepSeek API. If the "Simplify" option is enabled, the output contains only the data field with the array of completion choices returned by the API; otherwise, the full raw response is returned.

The node does not output binary data.

Example simplified output structure:

{
  "data": [
    {
      "text": "Generated completion text here",
      "index": 0,
      "logprobs": null,
      ...
    }
  ]
}
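
To use this output in a later step, read the first element of data. In n8n this is typically done with an expression such as {{ $json.data[0].text }}; the TypeScript sketch below shows the same access pattern, assuming only the simplified shape shown above.

// Reading the first completion from the simplified output shape shown above.
// The type models only the fields that appear in the example.
type SimplifiedOutput = {
  data: Array<{ text: string; index: number; logprobs: unknown }>;
};

function firstCompletionText(output: SimplifiedOutput): string {
  return output.data[0]?.text ?? "";
}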

Dependencies

  • Requires an API key credential for authenticating with the DeepSeek AI service.
  • The base URL for API requests is set to http://host.docker.internal:1234/v1 by default, which may need adjustment depending on deployment.
  • The node depends on the DeepSeek API being accessible and the availability of models starting with "deepseek-coder".
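
To verify that the endpoint and credential work before running the node, a quick check against the model listing can help. The sketch below assumes the base URL exposes an OpenAI-compatible GET /models endpoint and uses illustrative DEEPSEEK_BASE_URL and DEEPSEEK_API_KEY environment variables; adjust both to your deployment.

// Connectivity and credential check against the configured base URL,
// assuming an OpenAI-compatible GET /models endpoint is available there.
const API_BASE = process.env.DEEPSEEK_BASE_URL ?? "http://host.docker.internal:1234/v1";

async function listCoderModels(): Promise<string[]> {
  const res = await fetch(`${API_BASE}/models`, {
    headers: { Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}` },
  });
  if (!res.ok) {
    throw new Error(`Model listing failed: ${res.status} ${res.statusText}`);
  }
  const { data } = await res.json();
  // The node only offers models whose IDs start with "deepseek-coder".
  return data
    .map((model: { id: string }) => model.id)
    .filter((id: string) => id.startsWith("deepseek-coder"));
}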

Troubleshooting

  • Common issues:

    • Invalid or missing API key leading to authentication errors.
    • Selecting a model that does not exist or is unavailable.
    • Exceeding token limits causing request failures.
    • Network connectivity issues to the configured API endpoint.
  • Error messages:

    • Authentication errors typically indicate invalid credentials; verify and update the API key.
    • Model not found errors suggest selecting a valid model from the dropdown.
    • Token limit exceeded errors require reducing the max tokens parameter.
    • Timeout or connection refused errors indicate network or server availability problems.
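
If you call the API directly while debugging, the failure modes above can be told apart by status code. The mapping below follows typical OpenAI-compatible conventions; the exact codes returned by DeepSeek or a local server may differ, so treat it as an assumption rather than a specification.

// Rough mapping of common failure modes to the issues listed above.
// Status-code interpretations follow typical OpenAI-compatible conventions
// and may differ from what the configured server actually returns.
const COMPLETIONS_URL = "http://host.docker.internal:1234/v1/completions";

async function completeWithDiagnostics(body: object): Promise<unknown> {
  let res: Response;
  try {
    res = await fetch(COMPLETIONS_URL, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}`,
      },
      body: JSON.stringify(body),
    });
  } catch (err) {
    // Connection refused, DNS failure, or timeout: check the base URL and network.
    throw new Error(`Cannot reach the DeepSeek endpoint: ${(err as Error).message}`);
  }

  if (res.status === 401) {
    throw new Error("Authentication failed: verify and update the API key credential.");
  }
  if (res.status === 404) {
    throw new Error("Model or endpoint not found: select a valid model from the dropdown.");
  }
  if (res.status === 400) {
    throw new Error("Bad request: the prompt plus Maximum Number of Tokens may exceed the model's limit.");
  }
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}.`);
  }
  return res.json();
}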
