Overview
This node generates vector embeddings using the Ollama platform's embedding models. It connects to a vector store node and transforms input data into numerical vector representations that can be used for similarity search, clustering, or other machine learning tasks involving vector data.
Common scenarios include:
- Creating embeddings for documents or text snippets to enable semantic search.
- Preprocessing data for AI workflows that require vector inputs.
- Integrating with vector databases or stores to index and query data efficiently.
For example, you might use this node to convert customer feedback text into embeddings, then store those vectors in a vector database to find similar feedback entries quickly.
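The similarity-search scenario above can be sketched as follows. This is an illustrative example, not part of the node: the vectors are tiny hand-made stand-ins (real Ollama embeddings have hundreds or thousands of dimensions), and the feedback texts are hypothetical.

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Hypothetical stored feedback entries with their (toy) embeddings.
const stored: Array<{ text: string; vector: number[] }> = [
  { text: "Shipping was slow", vector: [0.9, 0.1, 0.0] },
  { text: "Great customer support", vector: [0.0, 0.2, 0.9] },
];

// Toy embedding of the query "delivery took too long".
const queryVector = [0.8, 0.2, 0.1];

// Rank stored entries by similarity to the query, highest first.
const ranked = stored
  .map((e) => ({ text: e.text, score: cosineSimilarity(queryVector, e.vector) }))
  .sort((x, y) => y.score - x.score);
```

In a real workflow the embeddings would come from this node and the ranking would be performed by the connected vector store; the sketch only shows why semantically similar texts end up close together in vector space.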
Properties
| Name | Meaning |
|---|---|
| This node must be connected to a vector store. Insert one | A notice indicating that this node requires a connection to a vector store node to function properly. |
| Model | The specific Ollama embedding model to use for generating embeddings. Models can be selected from a dynamically loaded list fetched from the Ollama Models Library. Example: "llama3.2". |
| Options | Additional optional settings for the node. |
| Keep Alive | Duration to keep the loaded model in memory after use, which helps optimize performance for frequently used models. Format examples: "5m" (5 minutes), "1h30m" (1 hour 30 minutes). |
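The Keep Alive value follows a compact duration format. A minimal sketch of parsing it, assuming the Go-style `<number><unit>` segments shown in the examples ("5m", "1h30m"); `parseKeepAlive` is a hypothetical helper, not part of the node:

```typescript
// Parse a duration string such as "5m" or "1h30m" into total seconds.
// Supports hours (h), minutes (m), and seconds (s) segments.
function parseKeepAlive(duration: string): number {
  const units: Record<string, number> = { h: 3600, m: 60, s: 1 };
  let total = 0;
  for (const [, value, unit] of duration.matchAll(/(\d+)([hms])/g)) {
    total += Number(value) * units[unit];
  }
  return total;
}
```

For example, `parseKeepAlive("1h30m")` yields 5400 seconds.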
Output
The node outputs data on the `ai_embedding` output channel with the name "Embeddings". The output contains the generated vector embeddings in JSON format. These embeddings represent the input data transformed into numerical vectors suitable for downstream AI or vector store operations.
The node does not produce binary data output.
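A minimal shape check for the vectors carried on the `ai_embedding` output. The exact envelope a downstream vector store node receives depends on that node, so this guard is an assumption about the vector payload only:

```typescript
// Type guard: true if the value is a non-empty array of finite numbers,
// which is the shape an embedding vector is expected to have.
function isEmbeddingVector(value: unknown): value is number[] {
  return (
    Array.isArray(value) &&
    value.length > 0 &&
    value.every((x) => typeof x === "number" && Number.isFinite(x))
  );
}
```

Such a check can be useful in a Code node when debugging why a vector store rejects incoming data.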
Dependencies
- Requires an active connection to a vector store node within the workflow.
- Requires valid credentials for accessing the Ollama API service, including the base URL and authentication token.
- Uses the `@langchain/ollama` package internally to interact with the Ollama embeddings API.
- Dynamically fetches available models from the Ollama API endpoint `/api/tags`.
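The `/api/tags` endpoint returns the locally available models as JSON. A sketch of extracting model names from that response, assuming the documented `{ "models": [{ "name": ... }] }` shape; the sample payload below is illustrative:

```typescript
// Minimal shape of Ollama's /api/tags response (additional fields omitted).
interface TagsResponse {
  models: Array<{ name: string }>;
}

// Extract just the model names, as a model-selection dropdown would need.
function listModelNames(response: TagsResponse): string[] {
  return response.models.map((m) => m.name);
}

// Example payload resembling what /api/tags might return.
const sample: TagsResponse = {
  models: [{ name: "llama3.2:latest" }, { name: "nomic-embed-text:latest" }],
};
```

In a live workflow the node performs this fetch itself against the configured base URL; the sketch only shows how the response maps to the Model dropdown options.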
Troubleshooting
- Connection Issues: If the node is not connected to a vector store, it will not function correctly. Ensure the vector store node is properly linked.
- Invalid Credentials: Errors related to authentication typically indicate missing or incorrect API credentials for Ollama. Verify that the API key or token is correctly configured.
- Model Not Found: Selecting a model that does not exist or is unavailable may cause errors. Refresh the model list or check the Ollama Models Library for valid options.
- Timeouts or Performance: If embedding generation is slow, consider increasing the "Keep Alive" option to keep the model loaded longer, reducing load times for frequent requests.
Links and References
- Ollama Models Library — Browse and download available embedding models.
- n8n Embeddings Ollama Node Documentation