# blender-render-farm

A serverless distributed rendering system for Blender animations using AWS Batch, S3, and Lambda. This project enables parallel frame rendering of Blender projects in the cloud using AWS Fargate containers.

## Architecture

The system consists of four main components:
### blender-renderer

A Go application that runs in a Docker container to render individual Blender frames.

- Downloads `.blend` files from S3
- Renders specified frames using Blender's Cycles engine
- Uploads rendered frames back to S3
- Built on the `accetto/ubuntu-vnc-xfce-opengl-g3` base image with Blender 4.1.1
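As a sketch of how such a renderer might drive Blender: the CLI flags below (`-b`, `-E`, `-o`, `-f`) are Blender's documented batch-rendering options, while the `buildBlenderArgs` helper and the `frame_####` output pattern are illustrative assumptions, not this project's actual code.

```go
package main

import (
	"fmt"
	"strconv"
)

// buildBlenderArgs assembles a background-render invocation for a single
// frame: -b runs Blender headless on the given file, -E CYCLES selects the
// render engine, -o sets the output pattern (#### expands to the
// zero-padded frame number), and -f renders exactly one frame. Order
// matters: -f triggers the render, so it must come after -o.
func buildBlenderArgs(blendPath, outDir string, frame int) []string {
	return []string{
		"-b", blendPath,
		"-E", "CYCLES",
		"-o", outDir + "/frame_####",
		"-f", strconv.Itoa(frame),
	}
}

func main() {
	fmt.Println(buildBlenderArgs("/tmp/animation.blend", "/tmp/out", 7))
}
```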
### zip-frames

A Go application that consolidates rendered frames into a ZIP archive.

- Runs after all frames are rendered
- Downloads frames from the S3 folder
- Creates a ZIP archive and uploads it to S3
- Lightweight Ubuntu-based container
### initateAwsBatch

An AWS Lambda function that orchestrates the rendering pipeline.

- Triggered by S3 upload events (`.blend` file uploads)
- Reads the frame count from S3 object tags
- Submits an AWS Batch array job for parallel frame rendering
- Submits a dependent ZIP job to run after rendering completes
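The frame-count lookup can be sketched as a small pure helper. The `frame` tag key matches the upload example later in this README; the `Tag` struct and `frameCount` function are illustrative stand-ins for the S3 GetObjectTagging response (written in Go for consistency with the renderer code, though the Lambda itself is Node-based).

```go
package main

import (
	"fmt"
	"strconv"
)

// Tag mirrors the Key/Value pairs returned by S3 GetObjectTagging.
type Tag struct{ Key, Value string }

// frameCount extracts the "frame" tag and validates it, so a bad upload
// fails fast instead of submitting a malformed Batch job.
func frameCount(tags []Tag) (int, error) {
	for _, t := range tags {
		if t.Key != "frame" {
			continue
		}
		n, err := strconv.Atoi(t.Value)
		if err != nil || n < 1 {
			return 0, fmt.Errorf("invalid frame tag %q", t.Value)
		}
		return n, nil
	}
	return 0, fmt.Errorf("object has no frame tag")
}

func main() {
	n, err := frameCount([]Tag{{Key: "frame", Value: "120"}})
	fmt.Println(n, err)
}
```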
### infra

Pulumi-based infrastructure as code that provisions all AWS resources.
- S3 bucket for storing blend files and rendered frames
- ECR repository for Docker images
- AWS Batch compute environment (Fargate)
- Job queue and job definitions
- Lambda function with S3 event triggers
- IAM roles and policies
## How It Works

1. Upload a `.blend` file to the S3 bucket with a `frame` tag indicating the number of frames
2. The Lambda function is triggered automatically
3. The Lambda creates an AWS Batch array job with one task per frame
4. Each Batch task renders a single frame in parallel using Fargate
5. After all frames complete, a ZIP job consolidates the output
6. Rendered frames and the ZIP file are available in S3
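The fan-out step above can be sketched as a planning helper. AWS Batch array jobs must declare a size between 2 and 10,000, so a one-frame upload would have to be submitted as a plain (non-array) job; the `renderPlan` type and `planJobs` function are illustrative (and in Go, though the real Lambda is Node-based), not this project's code.

```go
package main

import "fmt"

// renderPlan captures what the orchestrator submits: an array job sized
// to the frame count, plus a ZIP job that declares a dependency on it.
// Field names are illustrative, not the AWS SDK's.
type renderPlan struct {
	ArraySize int  // 0 means a plain single-frame job instead of an array job
	ZipWaits  bool // ZIP job runs only after the render job completes
}

func planJobs(frames int) (renderPlan, error) {
	switch {
	case frames < 1 || frames > 10000:
		return renderPlan{}, fmt.Errorf("frame count %d out of range", frames)
	case frames == 1:
		// Batch array jobs require size >= 2, so fall back to a plain job.
		return renderPlan{ArraySize: 0, ZipWaits: true}, nil
	default:
		return renderPlan{ArraySize: frames, ZipWaits: true}, nil
	}
}

func main() {
	fmt.Println(planJobs(120))
}
```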
## Prerequisites

- Go 1.x
- Node.js and pnpm
- Docker
- AWS CLI configured
- Pulumi CLI
- Amber (for build scripts)
## Project Structure

```
.
├── blender-renderer/    # Frame rendering container
│   ├── src/
│   ├── Dockerfile
│   └── scripts/
├── zip-frames/          # Frame zipping container
│   ├── src/
│   ├── Dockerfile
│   └── scripts/
├── initateAwsBatch/     # Lambda function
│   └── src/
├── infra/               # Pulumi infrastructure
│   └── src/
└── decisionDocs/        # Architecture decision records
```
## Setup

Clone the repository:

```sh
git clone <repository-url>
cd blender-render-farm
```

Install dependencies:

```sh
cd infra
pnpm install
cd ../initateAwsBatch
pnpm install
```

Deploy the infrastructure:

```sh
cd infra
pulumi up
```

Note the outputs from Pulumi, including:
- S3 bucket name
- ECR repository URL
- Lambda function ARN
## Build and Push Docker Images

For blender-renderer:

```sh
cd blender-renderer
npm run docker:build

# Tag and push to the ECR repository
docker tag blender-renderer:latest <ecr-repo-url>/blender-renderer:latest
docker push <ecr-repo-url>/blender-renderer:latest
```

For zip-frames:

```sh
cd zip-frames
npm run docker:build

# Tag and push to the ECR repository
docker tag zip-frames:latest <ecr-repo-url>/zip-frames:latest
docker push <ecr-repo-url>/zip-frames:latest
```

## Usage

Upload your `.blend` file to the S3 bucket with a `frame` tag indicating the total number of frames to render:
```sh
aws s3 cp animation.blend s3://<bucket-name>/animation.blend \
  --tagging "frame=120"
```

Check the AWS Batch console to monitor rendering jobs:
- Each frame renders as a separate array task
- Jobs run in parallel based on available compute capacity
All rendered frames will be stored in S3 at:

```
s3://<bucket-name>/animation.blend/<frame-files>
```

A ZIP file containing all frames will also be created:

```
s3://<bucket-name>/animation.blend/frames.zip
```
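Assuming a simple per-frame naming scheme, this layout can be sketched as a key-building helper. The `frame_%04d.png` filename pattern and the `outputKeys` function are illustrative assumptions; the `<blend-key>/frames.zip` location matches the paths shown above.

```go
package main

import "fmt"

// outputKeys derives the S3 locations used by the pipeline: rendered
// frames land under a prefix named after the blend file's key, and the
// ZIP job writes frames.zip alongside them.
func outputKeys(blendKey string, frame int) (frameKey, zipKey string) {
	frameKey = fmt.Sprintf("%s/frame_%04d.png", blendKey, frame)
	zipKey = blendKey + "/frames.zip"
	return
}

func main() {
	f, z := outputKeys("animation.blend", 42)
	fmt.Println(f, z)
}
```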
Download the ZIP file:

```sh
aws s3 cp s3://<bucket-name>/animation.blend/frames.zip ./rendered-frames.zip
```

## Configuration

blender-renderer:

- Set via AWS Batch job command overrides
- `-blend`: S3 key of the blend file
- `-bucket`: S3 bucket name
- Frame number: automatically set by the AWS Batch array index
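A minimal sketch of how the container might consume these overrides, assuming AWS Batch's standard `AWS_BATCH_JOB_ARRAY_INDEX` environment variable (0-based) and an index-0-to-frame-1 mapping; the helper name is illustrative, not this project's code.

```go
package main

import (
	"flag"
	"fmt"
	"os"
	"strconv"
)

// frameFromIndex maps the 0-based AWS_BATCH_JOB_ARRAY_INDEX value to a
// 1-based Blender frame number. Treating a missing or malformed index as
// frame 1 is an assumption for illustration.
func frameFromIndex(raw string) int {
	idx, err := strconv.Atoi(raw)
	if err != nil || idx < 0 {
		return 1
	}
	return idx + 1
}

func main() {
	// Flags mirror the -blend / -bucket command overrides described above.
	blend := flag.String("blend", "", "S3 key of the .blend file")
	bucket := flag.String("bucket", "", "S3 bucket name")
	flag.Parse()

	frame := frameFromIndex(os.Getenv("AWS_BATCH_JOB_ARRAY_INDEX"))
	fmt.Printf("rendering s3://%s/%s frame %d\n", *bucket, *blend, frame)
}
```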
zip-frames:
- `-bucket`: S3 bucket name
- `-folder`: S3 folder path containing frames
initateAwsBatch Lambda (environment variables):

- `BUCKET_NAME`: S3 bucket for storage
- `JOB_DEFINITION_ARN`: Blender renderer job definition
- `JOB_QUEUE_ARN`: AWS Batch job queue
- `ZIP_JOB_DEFINITION_ARN`: ZIP job definition
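A hedged sketch of validating these variables at startup: `loadConfig` and its fail-fast behavior are illustrative, not the Lambda's actual code (and written in Go rather than the Lambda's Node runtime, to keep one example language). Failing immediately on a missing variable is easier to debug than a SubmitJob error later.

```go
package main

import (
	"fmt"
	"os"
)

// lambdaConfig holds the environment variables listed above.
type lambdaConfig struct {
	Bucket, JobDef, JobQueue, ZipJobDef string
}

// loadConfig takes a getenv function (os.Getenv in production, a map
// lookup in tests) and rejects any missing variable up front.
func loadConfig(getenv func(string) string) (lambdaConfig, error) {
	cfg := lambdaConfig{
		Bucket:    getenv("BUCKET_NAME"),
		JobDef:    getenv("JOB_DEFINITION_ARN"),
		JobQueue:  getenv("JOB_QUEUE_ARN"),
		ZipJobDef: getenv("ZIP_JOB_DEFINITION_ARN"),
	}
	for name, v := range map[string]string{
		"BUCKET_NAME":            cfg.Bucket,
		"JOB_DEFINITION_ARN":     cfg.JobDef,
		"JOB_QUEUE_ARN":          cfg.JobQueue,
		"ZIP_JOB_DEFINITION_ARN": cfg.ZipJobDef,
	} {
		if v == "" {
			return cfg, fmt.Errorf("missing environment variable %s", name)
		}
	}
	return cfg, nil
}

func main() {
	cfg, err := loadConfig(os.Getenv)
	fmt.Println(cfg, err)
}
```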
## Resource Specifications

- Compute: 4 vCPUs, 8 GB RAM per rendering task (Fargate)
- Timeout: 15 minutes per frame
- Platform: x86_64 Linux
- Blender Version: 4.1.1
- Render Engine: Cycles
## Architecture Decisions

See decisionDocs/ for architecture decision records, including:

- Why the Docker entrypoint was changed from `/usr/bin/tini` to `/start` (prevents Fargate tasks from hanging)
## Cost Considerations

- Fargate costs based on vCPU and memory per second
- S3 storage and data transfer costs
- Lambda invocations (minimal)
- Consider using Spot instances for cost savings (requires modification)
## Troubleshooting

Jobs not starting:

- Check AWS Batch compute environment status
- Verify ECR images are pushed and accessible
- Review IAM permissions for the ECS task execution role

Frames failing to render:

- Check CloudWatch logs for the specific array task
- Verify the `.blend` file is accessible in S3
- Ensure the `frame` tag is correctly set

ZIP job failing:

- Verify all rendering jobs completed successfully
- Check job dependencies in the AWS Batch console
## License

ISC