
OCI GenAI

Interact with Oracle Cloud Infrastructure Generative AI service

Overview

This node integrates with Oracle Cloud Infrastructure (OCI) Generative AI service to generate text completions based on user-provided prompts. It supports multiple AI models, including Cohere and Meta LLaMA variants, allowing users to customize generation parameters such as temperature, maximum tokens, and sampling strategies.

Typical use cases include:

  • Automating content creation like blog posts, summaries, or emails.
  • Generating conversational responses for chatbots.
  • Experimenting with different AI models for natural language generation tasks.
  • Enhancing applications with AI-driven text completions tailored by prompt and model choice.

For example, a user can input a prompt describing a product, select a specific model, and receive a generated marketing description in return.
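As a concrete sketch, the parameters such a workflow might set on this node could look like the following. The key names here are illustrative and mirror the property labels below; they are not the node's actual internal parameter schema.

```python
# Illustrative node configuration (key names are assumptions, not the
# node's real internal schema).
node_parameters = {
    "compartmentId": "ocid1.compartment.oc1..exampleuniqueID",  # hypothetical OCID
    "model": "cohere.command-r-08-2024 v1.7",
    "prompt": "Write a one-sentence marketing description for a stainless steel water bottle.",
    "temperature": 0.7,   # range 0 to 2; lower is more deterministic
    "maxTokens": 200,     # minimum 1
    "topP": 0.9,          # nucleus sampling, between 0 and 1
    "topK": 40,           # only applicable to certain Cohere models
}

# The constraints stated in the Properties section hold for this example:
assert 0 <= node_parameters["temperature"] <= 2
assert node_parameters["maxTokens"] >= 1
assert 0 <= node_parameters["topP"] <= 1
```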

Properties

Compartment ID: The OCID of the compartment where the GenAI model will be used.
Model: The AI model to use for text generation. Options include:
  - cohere.command-a-03-2025 v1.0
  - cohere.command-r-08-2024 v1.7
  - cohere.command-r-plus-08-2024 v1.6
  - meta.llama-3.1-405b-instruct
  - meta.llama-3.1-70b-instruct
  - meta.llama-3.2-90b-vision-instruct
  - meta.llama-3.3-70b-instruct
  - meta.llama-4-maverick-17b-128e-instruct-fp8
  - meta.llama-4-scout-17b-16e-instruct
  - xai.grok-3
  - xai.grok-3-fast
  - xai.grok-3-mini
  - xai.grok-3-mini-fast
Prompt: The text prompt to send to the language model.
Temperature: Controls the randomness of the output; lower values make the model more deterministic (range 0 to 2).
Max Tokens: Maximum number of tokens to generate in the response (minimum 1).
Top P: Controls diversity via nucleus sampling; value between 0 and 1.
Top K: Filters the vocabulary to the K most likely tokens; only applicable to certain Cohere models.
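The range constraints above can be expressed as a small validation step. The function below is a hypothetical sketch of such checks, not the node's actual implementation:

```python
from typing import Optional

def validate_params(temperature: float, max_tokens: int, top_p: float,
                    top_k: Optional[int], model: str) -> None:
    """Hypothetical range checks mirroring the property constraints above."""
    if not 0 <= temperature <= 2:
        raise ValueError("Temperature must be between 0 and 2")
    if max_tokens < 1:
        raise ValueError("Max Tokens must be at least 1")
    if not 0 <= top_p <= 1:
        raise ValueError("Top P must be between 0 and 1")
    # Assumption for illustration: treat Top K as Cohere-only based on the
    # model ID prefix.
    if top_k is not None and not model.startswith("cohere."):
        raise ValueError("Top K is only applicable to certain Cohere models")
```

For example, `validate_params(0.7, 200, 0.9, 40, "cohere.command-r-08-2024 v1.7")` passes, while an out-of-range temperature raises an error before any request is sent.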

Output

The node outputs an array of JSON objects, one per input item. Each output object merges the original input JSON with the generated text completion, stored under a field named chatResult.

If an error occurs while processing an item and "Continue On Fail" is enabled, the output JSON for that item contains an error field with the error message.

No binary data output is produced by this node.
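The merge behavior described above can be sketched as follows; this is a minimal illustration of the output shape, not the node's source code:

```python
def build_output(input_json: dict, completion: str) -> dict:
    # The generated text is merged into the original item under chatResult.
    return {**input_json, "chatResult": completion}

def build_error_output(input_json: dict, error: Exception) -> dict:
    # With "Continue On Fail" enabled, the failed item carries an error
    # field instead of a completion.
    return {**input_json, "error": str(error)}
```

So an input item `{"sku": "A1"}` with a successful generation becomes `{"sku": "A1", "chatResult": "..."}`, and a failed item becomes `{"sku": "A1", "error": "..."}`.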

Dependencies

  • Requires an API key credential for authenticating with OCI Generative AI service.
  • Needs proper OCI configuration including region and compartment ID.
  • Uses internal helper functions to obtain client instances and perform text generation requests.
  • The node depends on predefined model IDs and their associated API formats (e.g., COHERE, GENERIC).
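A lookup from model ID to API format might be structured like the sketch below. The specific format assignments shown are assumptions based on the vendor prefix (Cohere models using the COHERE chat format, Meta and xAI models using GENERIC); only a subset of the model list is included:

```python
# Hypothetical subset of the node's predefined model-to-format mapping.
API_FORMATS = {
    "cohere.command-r-08-2024 v1.7": "COHERE",
    "meta.llama-3.3-70b-instruct": "GENERIC",
    "xai.grok-3": "GENERIC",
}

def api_format_for(model_id: str) -> str:
    """Resolve the API format for a model, failing on unknown IDs."""
    try:
        return API_FORMATS[model_id]
    except KeyError:
        # Mirrors the "Model Not Found" behavior described in Troubleshooting.
        raise ValueError(f"Model Not Found: {model_id!r} is not in the predefined list") from None
```

An exact-match lookup like this explains why a model ID must match one of the available options character for character, including the version suffix.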

Troubleshooting

  • Model Not Found Error: If the selected model ID is not in the predefined list, the node throws an error. Ensure the model ID matches one of the available options exactly.
  • Authentication Issues: Failure to authenticate with OCI may occur if the API key credential is missing or invalid. Verify credentials and permissions.
  • Invalid Parameter Values: Parameters such as Temperature, Max Tokens, Top P, and Top K must be within their specified ranges. Providing out-of-range values may cause errors.
  • Network or Service Errors: Connectivity issues or service downtime can cause request failures. Check network access and OCI service status.
  • When "Continue On Fail" is enabled, errors per item are returned in the output JSON instead of stopping execution.
