Actions
Overview
This node allows you to interact with the Azure OpenAI Chat Completions API via YOOV. Specifically, it enables you to create chat completions by sending a sequence of messages (as in a conversation) and receiving AI-generated responses. This is useful for building conversational agents, chatbots, virtual assistants, or any workflow that requires natural language understanding and generation.
Practical examples:
- Automating customer support conversations.
- Generating creative writing or brainstorming ideas.
- Building interactive Q&A bots within n8n workflows.
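Under the hood, the node issues a standard Azure OpenAI Chat Completions HTTP request. The sketch below shows roughly what that request looks like; the resource name, deployment name, and API key are placeholders, and the exact request the node builds may differ.

```javascript
// Sketch of the HTTP request behind a chat completion call.
// `resource`, `deployment`, and `apiKey` values are placeholders.
function buildChatCompletionRequest({ resource, deployment, apiVersion, apiKey, messages }) {
  return {
    url:
      `https://${resource}.openai.azure.com/openai/deployments/${deployment}` +
      `/chat/completions?api-version=${apiVersion}`,
    method: 'POST',
    headers: {
      'api-key': apiKey, // Azure uses an api-key header, not a Bearer token
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ messages }),
  };
}

const req = buildChatCompletionRequest({
  resource: 'my-resource',     // placeholder Azure resource name
  deployment: 'gpt-35-turbo',  // placeholder deployment name
  apiVersion: '2023-05-15',
  apiKey: 'YOUR_API_KEY',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
});
console.log(req.url);
```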
Properties
Name | Meaning |
---|---|
API Version | The version of the Azure OpenAI API to use. Options: 2023-07-01-preview, 2023-06-01-preview, 2023-05-15, 2023-03-15-preview, 2022-12-01, 2022-03-01-preview. |
Model in Current Deployment | The model deployed in your Azure OpenAI resource. Options: gpt-35-turbo, gpt-35-turbo-16k, text-embedding-ada-002. |
Messages | The list of messages forming the conversation so far. Each message has a Role (Assistant, System, User) and Content (the text). Used when "Use JSON for Messages" is disabled. |
Use JSON for Messages | If enabled, lets you provide the messages as raw JSON instead of using the UI collection. |
Messages (JSON) | The messages to generate chat completions for, in JSON format. Only shown if "Use JSON for Messages" is enabled and for certain API versions. |
Options | Additional options for fine-tuning completion behavior: Max Tokens (maximum number of tokens to generate); Temperature (sampling temperature, 0–2); Top P (nucleus sampling probability, 0–1); Logit Bias (JSON object to modify token likelihoods); User (end-user ID for tracking/rate-limiting); N (number of completions to generate, 1–128); Stream (enable streaming responses); Logprobs (include log probabilities for top tokens, 0–100); Suffix (text to append after the completion); Echo (echo back the prompt); Stop (sequence indicating the end of the completion); Completion Config (custom config string); Cache Level (server-side caching level, 0–2); Presence Penalty (penalize tokens that have already appeared, -2 to 2); Frequency Penalty (penalize tokens based on how often they have appeared, -2 to 2); Best Of (generate multiple completions server-side, max 128); Model (name of the model to use). |
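When "Use JSON for Messages" is enabled, the raw payload might look like the sketch below. The conversation content and option values are illustrative, not defaults; the snake_case option names follow the Chat Completions API convention.

```javascript
// Illustrative raw messages payload for "Use JSON for Messages" mode,
// plus a typical Options configuration. All values are examples only.
const messages = [
  { role: 'system', content: 'You are a support agent for ACME Corp.' },
  { role: 'user', content: 'How do I reset my password?' },
];

const options = {
  max_tokens: 256,      // cap the length of the generated reply
  temperature: 0.7,     // 0–2; higher means more random output
  top_p: 1,             // nucleus sampling probability, 0–1
  n: 1,                 // number of completions to generate
  presence_penalty: 0,  // -2 to 2
  frequency_penalty: 0, // -2 to 2
};

console.log(JSON.stringify(messages));
```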
Output
The node outputs a JSON object containing the response from the Azure OpenAI Chat Completions API. The structure typically includes:
```json
{
  "id": "unique-response-id",
  "object": "chat.completion",
  "created": 1680000000,
  "model": "gpt-35-turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "AI-generated response text"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
```
- `choices`: Array of generated completions, each with an assistant message.
- `usage`: Token usage statistics.
- Other fields may be present depending on the API version and options used.
Note: The node does not output binary data.
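In a downstream step (for example an n8n Code node), the assistant's reply can be pulled out of this output as sketched below. The `response` object mirrors the structure shown above; only the field names from the Chat Completions schema are assumed.

```javascript
// Sketch: extracting the assistant reply from the node's JSON output.
// `response` stands in for the item received from the Azure OpenAI node.
const response = {
  id: 'unique-response-id',
  object: 'chat.completion',
  created: 1680000000,
  model: 'gpt-35-turbo',
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: 'AI-generated response text' },
      finish_reason: 'stop',
    },
  ],
  usage: { prompt_tokens: 10, completion_tokens: 20, total_tokens: 30 },
};

// Optional chaining guards against an empty choices array.
const reply = response.choices[0]?.message?.content ?? '';
console.log(reply); // "AI-generated response text"
```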
Dependencies
- External Service: Requires access to an Azure OpenAI service with the appropriate deployment and models.
- API Key/Credentials: You must configure the `yoovAzureOpenAIApi` credentials in n8n, including the hostname and authentication details for your Azure OpenAI instance.
- n8n Configuration: Ensure the node is properly authenticated and the correct endpoint is set in your credentials.
Troubleshooting
Common Issues:
- Invalid Credentials: If the API key or endpoint is incorrect, you may receive authentication errors.
- Model Not Deployed: If the selected model is not available in your Azure OpenAI deployment, the request will fail.
- Malformed Messages: Providing improperly formatted messages (especially in JSON mode) can result in parsing errors.
- Quota/Rate Limits: Exceeding Azure OpenAI quotas or rate limits will cause errors.
Error Messages:
- `"error": "Request failed with status code 401"` – Check your API credentials and permissions.
- `"error": "Model not found"` – Verify the model name and ensure it is deployed in your Azure resource.
- `"error": "Invalid JSON"` – Ensure your messages are valid JSON if using JSON input mode.
- `"error": "Too many tokens"` – Reduce the length of your input messages or lower the Max Tokens setting.
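For quota and rate-limit errors, a common client-side mitigation (not built into this node, as far as the documentation states) is to retry with exponential backoff. The sketch below assumes a hypothetical `callApi` function standing in for the actual HTTP request.

```javascript
// Hedged sketch: retry with exponential backoff on 429 (rate limit).
// `callApi` is a hypothetical stand-in for the real request function.
async function withRetry(callApi, maxAttempts = 3, baseDelayMs = 500) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await callApi();
    } catch (err) {
      // Only retry rate-limit errors; rethrow everything else immediately.
      if (err.status !== 429 || attempt === maxAttempts) throw err;
      const delayMs = baseDelayMs * 2 ** (attempt - 1); // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```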