
Roleplay AI v0.1.3

Interact with AI models for roleplaying scenarios

Overview

The Roleplay AI node enables interactive roleplaying conversations with AI chat models from providers such as OpenAI, Anthropic, or Google Gemini by sending structured prompts and receiving model-generated responses. It is designed for scenarios where you want to simulate a character in a conversation, complete with personality, scenario context, and custom formatting.

Common use cases:

  • Creating immersive chatbot experiences for games or storytelling.
  • Simulating customer support or training scenarios.
  • Generating creative dialogues between fictional characters.
  • Prototyping conversational AI applications with rich context.

Example:
You can configure the node to act as a medieval knight, provide a scenario (e.g., "defending a castle"), and have it respond to user messages in-character, following specific formatting guidelines.
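A minimal sketch of how that example might look as node settings, shown as a JSON summary of the property values (the key names below mirror the property labels in the table that follows and are illustrative, not exact internal parameter names):

{
  // Character setup
  "Character Name": "Sir Aldric",
  "Character Description": "A loyal medieval knight, formal in speech, sworn to protect his lord's castle.",
  "First Message": "Halt! Who approaches the gate at this hour?",
  // Optional context
  "Include Scenario": true,
  "Scenario": "Defending the castle during a night-time siege.",
  "Include Format Guidelines": true,
  "Format Guidelines": "Reply in first person, one short paragraph, with actions wrapped in asterisks.",
  // Incoming message and sampling settings
  "User Message": "I bring a message from the king.",
  "Temperature": 0.8
}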


Properties

Name | Type | Meaning
Use Custom Model | boolean | Whether to specify a custom model identifier instead of selecting from a list.
Model Name or ID | options | Select an AI model from the available list (used when not using a custom model).
Custom Model Identifier | string | A custom model name/ID (e.g., "gpt-4-turbo", "claude-3-opus"), used when "Use Custom Model" is enabled.
Character Name | string | The name of the character being roleplayed. Required.
Include User Name | boolean | Whether to include a custom user name in the conversation.
User Name | string | The user's name in the roleplay (defaults to "you" if not specified).
Character Description | string | A description of the character's role and personality. Required.
First Message | string | The initial message from the character that starts the conversation. Required.
Include Scenario | boolean | Whether to include scenario details in the prompt.
Scenario | string | Description of the roleplay scenario (used if "Include Scenario" is true).
Include Message Example | boolean | Whether to include example messages in the prompt.
Message Example | string | Example conversation format (used if "Include Message Example" is true).
Include Model Preset | boolean | Whether to include preset instructions for the model.
Model Preset | string | Instructions for model behavior (used if "Include Model Preset" is true).
Include Format Guidelines | boolean | Whether to include response formatting guidelines.
Format Guidelines | string | Guidelines for how the AI should format its responses (used if "Include Format Guidelines" is true).
Chat Summary | string | Optional summary of the chat history, inserted after the first message.
Chat History | json | JSON array of previous chat messages, each with a role, content, and optional name (see the example below this table).
Include Other Pre-Message | boolean | Whether to include additional pre-message instructions.
Other Pre-Message | string | Additional instructions inserted before the conversation starts (used if "Include Other Pre-Message" is true).
User Message | string | The message sent by the user to the chat model. Required.
Include Raw Output | boolean | Whether to include the full raw message data (all prompt and response messages) in the output.
Temperature | number | Sampling temperature for the model (controls randomness).
Additional Fields | collection | Advanced model parameters: Frequency Penalty, Max Tokens, Presence Penalty, Top P.
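The "Chat History" property expects a JSON array of message objects. A small illustrative example (the name field is optional, and the message contents here are made up):

[
  { "role": "user", "content": "What do you see from the battlements?", "name": "Mira" },
  { "role": "assistant", "content": "Torches on the eastern ridge. They will reach the walls by dawn." }
]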

Output

The node outputs a single object per input item with the following structure:

{
  "response": "<AI generated reply>"
}
  • response: The text reply generated by the AI model, trimmed of whitespace.

If Include Raw Output is enabled, the output will also contain:

{
  "raw_data": [
    // Array of all prompt and response messages exchanged, including roles and names
  ]
}
  • raw_data: An array representing the full conversation context sent to and received from the AI, including system, assistant, and user messages.
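For illustration, the combined output might then look roughly like this (message contents are made up, and the exact ordering of prompt messages depends on which optional properties are enabled):

{
  "response": "Stand fast! The eastern gate must hold until dawn.",
  "raw_data": [
    { "role": "system", "content": "You are Sir Aldric, a loyal medieval knight..." },
    { "role": "assistant", "content": "Halt! Who approaches the gate at this hour?" },
    { "role": "user", "content": "I bring a message from the king.", "name": "Mira" },
    { "role": "assistant", "content": "Stand fast! The eastern gate must hold until dawn." }
  ]
}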

Dependencies

  • External Service: Requires access to a Roleplay AI API endpoint compatible with OpenAI, Anthropic, or similar providers.
  • API Key: Must be configured via n8n credentials (roleplayAIApi), including:
    • apiKey: Your API key for the selected provider.
    • baseUrl: The base URL of the API endpoint.
    • providerType: (Optional) Provider type, e.g., "openai", "anthropic".
  • n8n Configuration: Ensure the credentials are set up in n8n under the correct credential type.
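As a rough sketch, the roleplayAIApi credential holds values along these lines (the key and URL below are placeholders, not real values):

{
  "apiKey": "sk-...",
  "baseUrl": "https://api.openai.com/v1",
  "providerType": "openai"
}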

Troubleshooting

Common Issues:

  • Missing API Key:
    Error: "No valid API key provided"
    Resolution: Check that your credentials are correctly set up in n8n.

  • Invalid Model Selection:
    Error: "Failed to load models: ..." or "No valid models found in API response"
    Resolution: Verify your API key and endpoint; ensure your account has access to the requested models.

  • Malformed Chat History:
    Error: "Invalid chat history format: ..."
    Resolution: Ensure the "Chat History" property contains valid JSON—an array of objects with at least role and content fields.

  • Invalid Response Format:
    Error: "Invalid response format from AI API"
    Resolution: The API did not return the expected structure. Check the compatibility of your API endpoint and model.

  • General API Errors:
    Any other error message returned from the API will be surfaced in the item's output if "Continue On Fail" is enabled.
