
Async Job Queue (FastAPI + Redis)

Overview

This project implements a simple asynchronous job queue system using FastAPI, Redis, and background worker processes.

The goal of the project is to demonstrate core backend systems concepts such as asynchronous task execution, queue-based architectures, fault tolerance, and separation of concerns — without unnecessary complexity.

The system allows clients to submit jobs via an HTTP API, process them in the background, and query their status independently of execution.


Problem Motivation

In synchronous systems, long-running tasks block request handling, leading to poor performance and unreliable behavior under load.

Real-world backend systems avoid this by decoupling job submission from job execution, allowing work to be processed asynchronously by background workers.

This project explores that design pattern in a minimal and approachable way.


System Architecture

High-level flow:

Client → FastAPI API → Redis Queue → Worker → Redis (Job Status)

Components

  • FastAPI: accepts job submissions and exposes job status endpoints.

  • Redis: acts as an in-memory job queue and metadata store.

  • Worker: a background process that executes jobs and handles retries.

Each component has a single responsibility, making the system easy to understand and extend.


API Design

Submit Job

POST /submit-job

Request body:

{
  "task": "send_email"
}

Response:

{
  "job_id": "uuid"
}

Get Job Status

GET /job-status/{job_id}

Response:

{
  "job_id": "uuid",
  "task": "send_email",
  "status": "pending | processing | completed | failed",
  "retries": 1
}

The API remains stateless and does not perform any background work directly.


Queue Design

Jobs are stored in Redis using two structures:

  • A Redis list (job_queue) that holds job IDs
  • Per-job metadata stored as JSON under keys like job:{job_id}

This design keeps queue operations lightweight while allowing job state to be updated independently.

Redis is used for its atomic operations and simplicity, rather than hand-rolling a custom queue with the same guarantees.


Worker Design

Workers run as separate processes and continuously poll the Redis queue.

For each job, the worker:

  1. Marks the job as processing
  2. Executes the task
  3. Marks the job as completed or retries on failure

To simulate real-world conditions, tasks may fail intermittently.
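The three steps above can be sketched as a single worker iteration. This is a sketch, not the project's worker: the in-memory `store` dict stands in for Redis, and `run_task` is an injected callable so that intermittent failure can be simulated.

```python
import json

MAX_RETRIES = 3  # retry limit described in the Retry Logic section

def run_once(job_id: str, store: dict[str, str], run_task) -> str:
    """One worker iteration over a job whose metadata lives in `store`."""
    key = f"job:{job_id}"
    meta = json.loads(store[key])
    meta["status"] = "processing"           # step 1: mark as processing
    store[key] = json.dumps(meta)
    try:
        run_task(meta["task"])              # step 2: execute the task
        meta["status"] = "completed"        # step 3: success path
    except Exception:
        if meta["retries"] < MAX_RETRIES:   # step 3: failure path
            meta["retries"] += 1
            meta["status"] = "pending"      # requeue for another attempt
        else:
            meta["status"] = "failed"       # retry budget exhausted
    store[key] = json.dumps(meta)
    return meta["status"]
```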


Retry Logic

  • Maximum of 3 retries per job
  • Failed jobs are requeued until retries are exhausted
  • Jobs that exceed the retry limit are marked as failed

This introduces basic fault tolerance without adding unnecessary complexity.
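The retry rule above reduces to a small decision function. A sketch mirroring the bullets rather than the actual worker code, where `retries` counts the failed attempts recorded so far:

```python
MAX_RETRIES = 3  # maximum retries per job, per the bullets above

def after_failure(retries: int) -> tuple[str, int]:
    """Return the next status and retry counter after a failed attempt."""
    if retries < MAX_RETRIES:
        return "pending", retries + 1   # requeue for another attempt
    return "failed", retries            # retry limit exhausted
```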


Design Decisions

  • FastAPI was chosen for its clarity, type safety, and automatic API documentation
  • Redis was chosen for simplicity and industry relevance
  • Separate worker process ensures API responsiveness
  • UUID-based job IDs avoid coordination or collisions

The system prioritizes clarity and correctness over feature completeness.


Tradeoffs & Limitations

  • Redis queue is in-memory and not persistent across restarts
  • No priority scheduling
  • Single Redis instance
  • Tasks are simulated rather than real workloads

These tradeoffs were accepted intentionally to keep the project focused and explainable.


Possible Extensions

  • Persistent storage for completed jobs
  • Multiple worker processes for parallel execution
  • Job result payloads
  • Monitoring and metrics
  • Docker Compose for full system orchestration

How to Run

  1. Start Redis (via Docker or a local installation)
  2. Start the API server:
     uvicorn api:app --reload
  3. Start a worker process:
     python worker.py
  4. Submit jobs via the interactive docs:
     http://127.0.0.1:8000/docs

Purpose

This project was built to better understand backend systems fundamentals such as asynchronous processing, queue-based architectures, and reliability tradeoffs.
