This project implements a simple asynchronous job queue system using FastAPI, Redis, and background worker processes.
The goal of the project is to demonstrate core backend systems concepts such as asynchronous task execution, queue-based architectures, fault tolerance, and separation of concerns — without unnecessary complexity.
The system allows clients to submit jobs via an HTTP API, process them in the background, and query their status independently of execution.
In synchronous systems, long-running tasks block request handling, leading to poor performance and unreliable behavior under load.
Real-world backend systems avoid this by decoupling job submission from job execution, allowing work to be processed asynchronously by background workers.
This project explores that design pattern in a minimal and approachable way.
High-level flow:
Client → FastAPI API → Redis Queue → Worker → Redis (Job Status)
- FastAPI: Accepts job submissions and exposes job status endpoints.
- Redis: Acts as an in-memory job queue and metadata store.
- Worker: A background process that executes jobs and handles retries.
Each component has a single responsibility, making the system easy to understand and extend.
POST /submit-job

Request body:

```json
{
  "task": "send_email"
}
```

Response:

```json
{
  "job_id": "uuid"
}
```

GET /job-status/{job_id}

Response:

```json
{
  "job_id": "uuid",
  "task": "send_email",
  "status": "pending | processing | completed | failed",
  "retries": 1
}
```

The API remains stateless and does not perform any background work directly.
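The handler logic behind these two endpoints can be sketched as plain functions, with a dict and a list standing in for Redis (the names `submit_job`, `job_status`, `JOBS`, and `JOB_QUEUE` are illustrative, not taken from the project):

```python
import json
import uuid

# In-memory stand-ins for Redis, used only to illustrate the endpoint logic.
JOBS = {}        # maps "job:{job_id}" to JSON-encoded metadata
JOB_QUEUE = []   # stands in for the Redis list "job_queue"

def submit_job(task: str) -> dict:
    """Logic behind POST /submit-job: record metadata, enqueue, return the id."""
    job_id = str(uuid.uuid4())
    JOBS[f"job:{job_id}"] = json.dumps(
        {"job_id": job_id, "task": task, "status": "pending", "retries": 0}
    )
    JOB_QUEUE.append(job_id)
    return {"job_id": job_id}

def job_status(job_id: str) -> dict:
    """Logic behind GET /job-status/{job_id}: read the metadata back."""
    return json.loads(JOBS[f"job:{job_id}"])
```

In the real system these functions would be FastAPI route handlers and the dict and list would live in Redis, but the request and response shapes are the same.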
Jobs are stored in Redis using two structures:
- A Redis list (`job_queue`) that holds job IDs
- Per-job metadata stored as JSON under keys like `job:{job_id}`
This design keeps queue operations lightweight while allowing job state to be updated independently.
Redis is used for its atomic list operations and its simplicity, rather than reimplementing a queue by hand.
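These two structures map directly onto Redis commands: `LPUSH`/`BRPOP` for the queue list and `SET`/`GET` for the metadata keys. A sketch using the redis-py call shapes, written against any client object exposing those methods (`enqueue_job` and `dequeue_job` are illustrative names, not from the project):

```python
import json

def enqueue_job(r, job_id: str, task: str) -> None:
    """Write job:{job_id} metadata, then push the id onto job_queue.

    `r` is any Redis-like client exposing set() and lpush(),
    e.g. redis.Redis() from redis-py.
    """
    meta = {"job_id": job_id, "task": task, "status": "pending", "retries": 0}
    r.set(f"job:{job_id}", json.dumps(meta))
    r.lpush("job_queue", job_id)

def dequeue_job(r, timeout: int = 5):
    """Block up to `timeout` seconds for a job id (BRPOP); None if the queue stays empty."""
    popped = r.brpop("job_queue", timeout=timeout)
    return popped[1] if popped else None
```

Because `BRPOP` pops from the opposite end to `LPUSH`, the list behaves as a FIFO queue.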
Workers run as separate processes and continuously poll the Redis queue.
For each job, the worker:
- Marks the job as `processing`
- Executes the task
- Marks the job as `completed`, or retries on failure
To simulate real-world conditions, tasks may fail intermittently.
- Maximum of 3 retries per job
- Failed jobs are requeued until retries are exhausted
- Jobs that exceed the retry limit are marked as `failed`
This introduces basic fault tolerance without adding unnecessary complexity.
- FastAPI was chosen for its clarity, type safety, and automatic API documentation
- Redis was chosen for simplicity and industry relevance
- Separate worker process ensures API responsiveness
- UUID-based job IDs avoid coordination or collisions
The system prioritizes clarity and correctness over feature completeness.
- Redis queue is in-memory and not persistent across restarts
- No priority scheduling
- Single Redis instance
- Tasks are simulated rather than real workloads
These tradeoffs were accepted intentionally to keep the project focused and explainable.
- Persistent storage for completed jobs
- Multiple worker processes for parallel execution
- Job result payloads
- Monitoring and metrics
- Docker Compose for full system orchestration
- Start Redis (via Docker or a local installation)
- Start the API server: `uvicorn api:app --reload`
- Start a worker process: `python worker.py`
- Submit jobs via the interactive docs at `http://127.0.0.1:8000/docs`
This project was built to better understand backend systems fundamentals such as asynchronous processing, queue-based architectures, and reliability tradeoffs.