Overview
This node provides custom integration with AWS S3, enabling users to perform various file and folder operations on their S3 buckets. It supports actions such as uploading, downloading, copying, moving, and deleting files, as well as creating and deleting folders and buckets. The node is useful for automating workflows that involve managing files in AWS S3 storage, such as backing up data, processing files, or organizing cloud storage.
Practical examples include:
- Uploading processed images or documents to a specific S3 bucket.
- Downloading files from S3 for further processing in a workflow.
- Moving or copying files between folders or buckets within S3.
- Creating new folders or buckets dynamically based on workflow logic.
- Deleting obsolete files or folders to manage storage costs.
Properties
Name | Meaning |
---|---|
Provider | The cloud provider to use; default is "aws". |
Region | The AWS region where the S3 bucket or resource is located. Options include all standard AWS regions, such as "us-east-1" (N. Virginia), "eu-west-1" (Ireland), and "ap-southeast-1" (Singapore). |
Access Key ID | The access key ID credential required to authenticate with AWS. |
Secret Access Key | The secret access key credential required to authenticate with AWS. This is a password-type field for security. |
Custom Endpoint | Optional custom endpoint URL for connecting to an S3-compatible service other than AWS. |
The node exposes further properties for individual file and folder operations; they are not detailed here because this section focuses on the default resource and operation.
Output
The node outputs JSON data representing the result of the performed operation. For example:
- On successful upload, move, copy, create, or delete operations, it returns an array with success status objects.
- On download operations, it outputs the downloaded file's content in binary form under a specified binary property name, along with metadata such as MIME type.
- For folder and bucket operations, it returns relevant success information about the created or deleted resources.
If binary data is involved (e.g., downloading a file), the node prepares the binary data properly so it can be used downstream in the workflow.
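For downloads, the output resembles n8n's standard item shape, with the file content attached under the specified binary property. A hedged sketch of how such an item could be assembled (the field names and the default property name `data` are assumptions based on common n8n conventions, not confirmed by this node's source):

```python
import base64
import mimetypes

def build_download_item(file_key: str, content: bytes, binary_property: str = "data") -> dict:
    """Assemble an output item carrying a downloaded file as binary data."""
    mime_type, _ = mimetypes.guess_type(file_key)
    return {
        "json": {"fileKey": file_key, "success": True},
        "binary": {
            binary_property: {
                # Binary payloads are stored base64-encoded.
                "data": base64.b64encode(content).decode("ascii"),
                "mimeType": mime_type or "application/octet-stream",
                "fileName": file_key.rsplit("/", 1)[-1],
            }
        },
    }
```

Downstream nodes can then read the file from the same binary property name configured on this node.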
Dependencies
- Requires valid AWS credentials: Access Key ID and Secret Access Key.
- Needs network access to AWS S3 endpoints or a compatible custom endpoint.
- Uses internal helper functions for interacting with AWS S3 API, including streaming uploads and downloads.
- No external npm packages beyond those bundled with n8n are explicitly required.
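The exact IAM permissions required depend on which operations a workflow uses. A minimal example policy covering common file operations might look like the following (the bucket name is a placeholder; bucket-level actions such as `s3:CreateBucket` would need to be added for bucket operations):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```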
Troubleshooting
- Downloading a directory is not supported: Attempting to download a folder by specifying a folder key ending with "/" will throw an error. Users must specify exact file keys for downloads.
- Authentication errors: Invalid or missing AWS credentials will cause authentication failures. Ensure correct Access Key ID and Secret Access Key are provided.
- Region mismatches: Specifying an incorrect region may lead to resource not found errors. Verify the region matches the location of your S3 bucket.
- File size limits: The node streams uploads and downloads rather than buffering whole files in memory, but very large files may still require additional configuration (e.g., longer timeouts).
- Permission issues: Operations may fail if the AWS credentials do not have sufficient permissions for the requested action (e.g., deleting files, creating buckets).
- Custom endpoint misconfiguration: When using a custom endpoint, ensure it is correctly formatted and accessible.
Common error messages are wrapped and thrown as node operation errors with descriptive messages to help identify the issue.
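The directory-download restriction above can be illustrated with a simple pre-flight check (a sketch; the helper name and exact error wording are assumptions):

```python
def validate_download_key(file_key: str) -> str:
    """Reject folder keys before attempting a download.

    S3 has no real directories; a key ending in "/" denotes a folder
    placeholder, which cannot be downloaded as a file.
    """
    if not file_key or file_key.endswith("/"):
        raise ValueError(
            f'Downloading a directory is not supported: "{file_key}". '
            "Specify the exact key of a file instead."
        )
    return file_key
```

To retrieve a folder's contents, list the objects under its prefix and download each file key individually.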