A blazingly fast embedded database with Redis-compatible protocol, built in Rust.
ToonStore is a high-performance key-value store that gives you the speed of embedded databases (5.28M ops/sec) with the convenience of Redis compatibility. Use it as an embedded library for maximum performance, or run it as a Redis-compatible server accessible from any language.
TOON (Token-Oriented Object Notation) is a compact, human-readable data format designed for the age of AI and LLMs. In a world where every token counts, both for cost and for context window limits, TOON provides 40-60% token savings compared to JSON while maintaining higher LLM comprehension accuracy.
- LLM-Optimized: Achieves 74% accuracy vs JSON's 70% in LLM comprehension benchmarks
- Cost-Efficient: ~40% fewer tokens = 40% lower API costs for AI applications
- Schema-Aware: Explicit `[N]` lengths and `{fields}` headers help LLMs parse data reliably
- JSON-Compatible: Encodes the same objects, arrays, and primitives with lossless round-trips
- Human-Readable: YAML-like readability with CSV-style compactness
JSON (22,250 tokens):

```json
{
  "metrics": [
    {"date": "2025-01-01", "views": 5715, "clicks": 211, "conversions": 28, "revenue": 7976.46},
    {"date": "2025-01-02", "views": 7103, "clicks": 393, "conversions": 28, "revenue": 8360.53}
  ]
}
```

TOON (9,120 tokens, a 59% reduction):

```
metrics[2]{date,views,clicks,conversions,revenue}:
2025-01-01,5715,211,28,7976.46
2025-01-02,7103,393,28,8360.53
```
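To make the mechanics concrete, here is a minimal sketch of the tabular encoding idea in Python. It is illustrative only (the `encode_tabular` helper is invented for this example, not part of any TOON library) and it skips the spec's quoting and escaping rules:

```python
# Illustrative sketch of TOON-style tabular encoding; not the official encoder,
# and it ignores quoting/escaping of values containing commas or newlines.
def encode_tabular(name, rows):
    """Flatten a uniform list of dicts into a TOON-like tabular block."""
    fields = list(rows[0].keys())
    header = f"{name}[{len(rows)}]{{{','.join(fields)}}}:"
    body = [",".join(str(row[f]) for f in fields) for row in rows]
    return "\n".join([header] + body)

metrics = [
    {"date": "2025-01-01", "views": 5715, "clicks": 211, "conversions": 28, "revenue": 7976.46},
    {"date": "2025-01-02", "views": 7103, "clicks": 393, "conversions": 28, "revenue": 8360.53},
]
print(encode_tabular("metrics", metrics))
```

Because field names appear once in the header instead of being repeated per object, the token count drops roughly in proportion to the number of rows.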
ToonStore uses the TOON format for efficient data storage. To learn more about the format specification:
- TOON Format Repository - Official spec and implementations
- TOON Specification - Complete technical specification
- Benchmarks - Token efficiency & accuracy comparisons
ToonStore is a persistent key-value database that combines the token-efficient TOON format with extreme performance:
- Extreme performance - 5.28M ops/sec for cached reads, 66x faster than network databases
- TOON format storage - Token-efficient data format perfect for AI/LLM applications
- Cost-efficient - ~40% fewer tokens means lower storage and transmission costs
- Redis compatibility - Works with existing Redis clients (Node.js, Python, Go, etc.)
- Embedded mode - Use directly in Rust applications for maximum speed
- Network mode - Run as a server, connect from any language
- Redis is fast but volatile (RAM-only by default), and configuring durable persistence adds complexity
- PostgreSQL/MySQL are reliable but slower for key-value workloads
- RocksDB/LevelDB are fast but lack network access and use inefficient storage formats
- Traditional formats (JSON, XML) waste tokens and storage space in the AI era
ToonStore combines the best of all worlds with TOON format storage:
| Feature | ToonStore | Redis | PostgreSQL | RocksDB |
|---|---|---|---|---|
| Speed | 5.28M ops/sec (embedded) | ~80k ops/sec | ~65k ops/sec | ~100k ops/sec |
| Persistent | Yes | Optional | Yes | Yes |
| Token-Efficient Format | TOON (~40% savings) | Binary | Binary | Binary |
| Redis Protocol | Yes | Yes | No | No |
| Embedded Mode | Yes | No | No | Yes |
| Network Mode | Yes | Yes | Yes | No |
| Multi-language | Yes | Yes | Yes | Yes (bindings) |
- 5.28M operations/second in embedded mode (cached reads)
- 215k ops/sec for storage operations (66x faster than network)
- 32M deletions/second (320x faster than Redis)
- ~40% fewer tokens compared to JSON storage
- Perfect for AI/LLM applications - lower costs, faster processing
- Human-readable - easy to inspect and debug stored data
- Schema-aware - built-in structure validation
- All data stored on disk using efficient TOON format
- Survives restarts and crashes
- Memory-mapped I/O for fast disk access
- Use any Redis client library (50+ languages supported)
- Familiar commands: `GET`, `SET`, `DEL`, `EXISTS`, `KEYS`
- Drop-in replacement for Redis in many use cases
Network Mode:

```javascript
// Connect from Node.js, Python, Go, etc.
const redis = require('redis');
const client = redis.createClient({ url: 'redis://localhost:6379' });
await client.set('key', 'value');
```

Embedded Mode:

```rust
// Direct Rust integration (66x faster!)
let cache = ToonCache::new("./data", 10000)?;
let id = cache.put(b"data")?;
let data = cache.get(id)?;
```

- Automatic caching of hot data in RAM
- 10,000 item default capacity (configurable)
- No manual cache management needed
- Single binary, no dependencies
- Docker images available
- Works on Linux, Windows, macOS
- Cross-platform (amd64 and arm64)
ToonStore saves all data persistently on disk using the efficient TOON format:
Data is stored in TOON (Token-Oriented Object Notation) format, which provides:
- ~40% fewer tokens compared to JSON
- Human-readable format for easy inspection
- LLM-optimized for better AI comprehension
- Persistent - survives restarts and crashes
By default, ToonStore saves data to:
```
./data/          # Data directory
├── toon.db      # Main database file (TOON format)
└── index.db     # Index mappings
```
When you save this data:
```json
{
  "id": "user:1",
  "name": "John Doe",
  "email": "john@example.com",
  "age": 30
}
```

ToonStore converts and stores it in TOON format:

```
user:1{id,name,email,age}:
user:1,John Doe,john@example.com,30
```
This format is:
- Persistent - Written to disk immediately
- Efficient - Smaller file size (~40% reduction)
- Fast - Quick to parse and serialize
- Cached - Hot data kept in memory (LRU cache)
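Decoding such a record is equally mechanical. A rough sketch (the `decode_record` helper is hypothetical, not ToonStore's actual parser, and values come back as strings):

```python
import re

# Hypothetical parser for a single TOON-style tabular record; illustrative only.
def decode_record(text):
    header, *rows = text.strip().splitlines()
    match = re.match(r"(.+?)\{(.+)\}:$", header)   # e.g. user:1{id,name,email,age}:
    key, fields = match.group(1), match.group(2).split(",")
    return key, [dict(zip(fields, row.split(","))) for row in rows]

record = "user:1{id,name,email,age}:\nuser:1,John Doe,john@example.com,30"
key, rows = decode_record(record)
print(key)              # user:1
print(rows[0]["name"])  # John Doe
```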
ToonStore provides:
- Automatic Persistence
  - Every `SET` operation writes to disk
  - No manual save/flush required
  - Data survives process restarts
- Memory-Mapped I/O
  - Fast disk access via OS-level caching
  - Automatic synchronization
  - Efficient memory usage
- Built-in LRU Cache
  - 10,000 item default capacity
  - 5.28M ops/sec for cached reads
  - Automatic cache invalidation
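The LRU layer's behavior can be modeled in a few lines. This is a toy Python model of least-recently-used caching in general, not ToonStore's Rust implementation:

```python
from collections import OrderedDict

# Toy LRU cache: capacity-bounded, least-recently-used entry evicted first.
# Models the behavior of the caching layer; not ToonStore's actual code.
class LruCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                  # cache miss: caller falls back to storage
        self.items.move_to_end(key)      # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LruCache(capacity=2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")         # touch "a" so "b" becomes the eviction candidate
cache.put("c", 3)      # evicts "b"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```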
```bash
# Specify data directory
tstd --data /path/to/data

# Configure cache size
tstd --capacity 50000
```

```bash
# View stored data files
ls -lh ./data/

# Check database size
redis-cli DBSIZE

# Get server info
redis-cli INFO
```

In the age of AI and LLMs, the TOON format provides significant advantages:
- Cost Savings - ~40% fewer tokens = 40% lower API costs
- Context Efficiency - More data fits in LLM context windows
- Better Comprehension - 74% accuracy vs JSON's 70% in LLM benchmarks
- Schema-Aware - Explicit structure helps LLMs parse reliably
Learn more about TOON: TOON Format Repository
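These numbers follow directly from the example earlier (22,250 JSON tokens vs 9,120 TOON tokens). Since token-billed API cost scales linearly with token count, the reduction carries over to spend; the price used below is hypothetical, for illustration only:

```python
json_tokens, toon_tokens = 22_250, 9_120

reduction = 1 - toon_tokens / json_tokens
print(f"{reduction:.0%} fewer tokens")  # 59% fewer tokens

# Hypothetical price, for illustration only.
price_per_1k_tokens = 0.01
saving = (json_tokens - toon_tokens) / 1000 * price_per_1k_tokens
print(f"saves ${saving:.2f} per request")  # saves $0.13 per request
```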
ToonStore is designed for speed:
| Operation | ToonStore (Embedded) | ToonStore (Network) | Redis | PostgreSQL |
|---|---|---|---|---|
| GET (cached) | 5.28M ops/sec | ~80k ops/sec | ~80k ops/sec | ~65k ops/sec |
| GET (storage) | 215k ops/sec | ~70k ops/sec | ~65k ops/sec | ~65k ops/sec |
| SET | 82k ops/sec | ~60k ops/sec | ~60k ops/sec | ~55k ops/sec |
| DELETE | 32M ops/sec | ~100k ops/sec | ~100k ops/sec | ~70k ops/sec |
Key Insight: Embedded mode is 66x faster than network mode (no TCP overhead)
See BENCHMARKS.md for detailed benchmarks and methodology.
```bash
# Pull from Docker Hub
docker pull toonstore/toonstoredb:latest

# Or pull from GitHub Container Registry
docker pull ghcr.io/kalama-tech/toonstoredb:latest

# Run ToonStore
docker run -d \
  --name toonstore \
  -p 6379:6379 \
  -v toonstore_data:/data \
  toonstore/toonstoredb:latest

# Test connection
redis-cli -h 127.0.0.1 -p 6379 PING
# Output: PONG

# Use it
redis-cli -h 127.0.0.1 -p 6379
127.0.0.1:6379> SET mykey "Hello World"
OK
127.0.0.1:6379> GET mykey
"Hello World"
```

With Docker Compose:
```bash
# Download docker-compose.yml
curl -O https://raw.githubusercontent.com/Kalama-Tech/toonstoredb/main/docker-compose.yml

# Start
docker-compose up -d

# Stop
docker-compose down
```

```bash
# Linux/macOS
curl -L https://github.com/Kalama-Tech/toonstoredb/releases/latest/download/tstd -o tstd
chmod +x tstd
./tstd --bind 0.0.0.0:6379

# Windows
# Download from: https://github.com/Kalama-Tech/toonstoredb/releases
```

```bash
# Clone repository
git clone https://github.com/Kalama-Tech/toonstoredb.git
cd toonstoredb

# Build (requires Rust 1.70+)
cargo build --release

# Run
./target/release/tstd --bind 0.0.0.0:6379
```

```toml
# Cargo.toml
[dependencies]
tooncache = { git = "https://github.com/Kalama-Tech/toonstoredb" }
```

```rust
use tooncache::ToonCache;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open database
    let cache = ToonCache::new("./data", 10000)?;

    // Store data
    let id = cache.put(b"Hello, World!")?;

    // Retrieve data
    let data = cache.get(id)?;
    println!("Retrieved: {:?}", String::from_utf8(data)?);

    Ok(())
}
```

ToonStore is Redis-compatible, so you can use any Redis client library:
```javascript
const redis = require('redis');
const client = redis.createClient({ url: 'toonstore://localhost:6379' });

await client.connect();
await client.set('user:1', 'John Doe');
const user = await client.get('user:1');
console.log(user); // "John Doe"
```

Note: ToonStore uses `toonstore://` as its connection string prefix for branding, but it's fully Redis-compatible, so `redis://` also works with any Redis client library.
```python
import redis

client = redis.from_url('toonstore://localhost:6379')
client.set('user:1', 'John Doe')
user = client.get('user:1')
print(user)  # b'John Doe'
```

```go
import "github.com/redis/go-redis/v9"

client := redis.NewClient(&redis.Options{
    Addr: "localhost:6379",
})

client.Set(ctx, "user:1", "John Doe", 0)
user, _ := client.Get(ctx, "user:1").Result()
fmt.Println(user) // "John Doe"
```

See docs/connecting-from-apps.md for more examples.
- Dual Mode: Network (Redis-compatible) or Embedded (5.28M ops/sec)
- TOON Format: Token-efficient storage (~40% savings vs JSON) designed for AI/LLMs
- LRU Cache: Automatic caching with 5.28M ops/sec cached reads
- RESP Protocol: Works with any Redis client library
- Memory-Mapped I/O: Fast disk access with OS-level caching
- Cross-Platform: Linux, Windows, macOS
- Docker Ready: Official images on Docker Hub
`PING`, `ECHO` - Connection testing
`GET`, `SET`, `DEL` - Core operations
`EXISTS`, `KEYS` - Key inspection
`DBSIZE`, `FLUSHDB` - Database management
`INFO` - Server statistics
- Quick Start Guide - Your first ToonStore app
- Installation - All installation methods
- Docker Deployment - Complete Docker guide
- Connection Guide - Network vs Embedded mode
- Architecture - 3-layer architecture & connection strings
- Rust API Reference - Embedded library usage
- RESP Server Guide - Network server setup
- Configuration - Server & cache tuning
- TOON Format - Storage format specification
- Performance Tuning - Optimization guide
- Benchmarks - Detailed performance data
- Docker Guide - Container deployment
- Production Checklist - Before going live
- Monitoring - Health checks & metrics
```
┌─────────────────────────────────────────────────────────────┐
│                        Application                          │
└───────────────┬─────────────────────┬───────────────────────┘
                │                     │
    Network Mode (tstd)       Embedded Mode (library)
    redis://host:port         ToonCache::new()
    ~70k ops/sec              5.28M ops/sec
                │                     │
                └──────────┬──────────┘
                           │
        ┌──────────────────────────────────┐
        │       tooncache (LRU Cache)      │
        │  - 5.28M ops/sec (cached reads)  │
        │  - Configurable capacity         │
        └─────────────────┬────────────────┘
                          │
        ┌──────────────────────────────────┐
        │   toonstoredb (Storage Engine)   │
        │  - 215k ops/sec (storage reads)  │
        │  - TOON format parser            │
        │  - Memory-mapped files           │
        └──────────────────────────────────┘
```
- AI/LLM applications - Token-efficient storage for embeddings, prompts, and context
- High-performance caching (5.28M ops/sec!)
- Embedded databases in Rust applications
- Redis replacement with better performance and efficiency
- Key-value storage with persistence and TOON format
- In-process caching with disk backup
- Cost-sensitive applications - Reduce storage and transmission costs by ~40%
Not designed for:

- ACID transactions
- Complex queries / JOINs
- Multi-node clustering
- Strong consistency guarantees
```bash
# Clone repository
git clone https://github.com/Kalama-Tech/toonstoredb
cd toonstoredb

# Build release
cargo build --release

# Run server
./target/release/tstd --bind 0.0.0.0:6379
```

```bash
# Install server binary
cargo install tstd

# Add to your Rust project
cargo add tooncache
```

```bash
# Pull image
docker pull ghcr.io/yourusername/toonstore:latest

# Run server
docker run -d \
  -p 6379:6379 \
  -v $(pwd)/data:/data \
  ghcr.io/yourusername/toonstore:latest
```

See DOCKER_SETUP_GUIDE.md for complete Docker setup.
Python

```python
import redis

client = redis.from_url('toonstore://localhost:6379')
client.set('key', 'value')
```

Node.js

```javascript
const Redis = require('ioredis');
const client = new Redis('toonstore://localhost:6379');
await client.set('key', 'value');
```

Go

```go
import "github.com/redis/go-redis/v9"

client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
client.Set(ctx, "key", "value", 0)
```

Rust

```rust
use tooncache::ToonCache;

let cache = ToonCache::new("./data", 10000)?;
```

Python (Coming Week 4)

```python
import toonstore

db = toonstore.ToonCache("./data", capacity=10000)
```

```bash
tstd \
  --bind 0.0.0.0:6379 \
  --data ./data \
  --capacity 10000
```

(`--bind`: bind address, `--data`: data directory, `--capacity`: cache capacity)

```bash
RUST_LOG=info  # Logging level (info, debug, trace)
```

```rust
let cache = ToonCache::new(
    "./data",  // Data directory
    10000,     // Cache capacity
)?;
```
```bash
# Connect with redis-cli
redis-cli -h 127.0.0.1 -p 6379

# Get statistics
127.0.0.1:6379> INFO
# Server
toonstore_version:0.1.0
# Stats
total_keys:1000
cache_size:850
cache_capacity:10000
cache_hits:95000
cache_misses:5000
cache_hit_ratio:0.95

# Check database size
127.0.0.1:6379> DBSIZE
(integer) 1000
```
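The `cache_hit_ratio` line is consistent with the usual definition, hits divided by total lookups; checking with the numbers above:

```python
cache_hits, cache_misses = 95_000, 5_000

# hit ratio = hits / (hits + misses)
hit_ratio = cache_hits / (cache_hits + cache_misses)
print(hit_ratio)  # 0.95
```

A ratio well below this usually means the cache capacity is too small for the working set; raising `--capacity` is the first knob to try.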
```bash
# Docker health check
tstd --health

# Or via TCP
redis-cli PING
PONG
```

```bash
# Check if port is already in use
netstat -an | grep 6379

# Try a different port
tstd --bind 127.0.0.1:6380
```

```bash
# Check server is running
ps aux | grep tstd

# Check firewall
sudo ufw allow 6379/tcp
```

```bash
# Increase cache capacity
tstd --capacity 50000

# Check cache hit ratio
redis-cli INFO | grep cache_hit_ratio

# Use embedded mode for maximum performance
```

See docs/troubleshooting.md for more solutions.
- Storage engine (toonstoredb)
- LRU cache (tooncache)
- RESP server (tstd)
- Basic benchmarks
- Docker support
- Python bindings (PyO3)
- npm package (Neon)
- Complete documentation
- PyPI + npm publish
- WAL for durability
- Transactions
- Replication
- More RESP commands
- Clustering support
ToonStore is in active development. We welcome contributions!
```bash
# Clone
git clone https://github.com/Kalama-Tech/toonstoredb
cd toonstoredb

# Build
cargo build

# Run tests
cargo test

# Run benchmarks
cargo bench
```

- Write tests for new features
- Run `cargo fmt` and `cargo clippy`
- Update documentation
- Follow existing code style
See CONTRIBUTING.md for details.
ToonStore is licensed under the MIT License.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: docs/
Built with ❤️ in Rust | Performance: 5.28M ops/sec | License: MIT