Documentation ¶
Overview ¶
Package goratelimit provides production-grade rate limiting for Go with seven algorithms, in-memory and Redis backends, and drop-in middleware for net/http, Gin, Echo, Fiber, and gRPC.
Algorithms ¶
- Fixed Window Counter — simple, fixed time intervals
- Sliding Window Log — precise, stores every timestamp
- Sliding Window Counter — weighted approximation, O(1) memory
- Token Bucket — steady refill, burst-friendly
- Leaky Bucket — constant drain, policing or shaping mode
- GCRA — virtual scheduling with sustained rate + burst
- Count-Min Sketch — fixed-memory probabilistic pre-filter
Quick Start ¶
limiter, err := goratelimit.New("", goratelimit.PerMinute(100))
result, _ := limiter.Allow(ctx, "user:123")
if result.Allowed {
// serve request
}
With Redis ¶
limiter, _ := goratelimit.New("redis://localhost:6379", goratelimit.PerMinute(100))
Algorithm-Specific Constructors ¶
limiter, _ := goratelimit.NewTokenBucket(100, 10,
goratelimit.WithRedis(redisClient),
)
Builder API ¶
limiter, _ := goratelimit.NewBuilder().
SlidingWindowCounter(100, 60*time.Second).
Redis(client).
Build()
All algorithms implement the Limiter interface and return a Result with Allowed, Remaining, Limit, ResetAt, and RetryAfter fields.
Index ¶
- Constants
- func CMSMemoryBytes(epsilon, delta float64) int
- type Builder
- func (b *Builder) Build() (Limiter, error)
- func (b *Builder) CMS(limit int64, window time.Duration, epsilon, delta float64) *Builder
- func (b *Builder) DryRun(dryRun bool) *Builder
- func (b *Builder) DryRunLogFunc(fn func(key string, result *Result)) *Builder
- func (b *Builder) FailOpen(v bool) *Builder
- func (b *Builder) FixedWindow(maxRequests int64, window time.Duration) *Builder
- func (b *Builder) GCRA(rate, burst int64) *Builder
- func (b *Builder) HashTag() *Builder
- func (b *Builder) KeyPrefix(prefix string) *Builder
- func (b *Builder) LeakyBucket(capacity, leakRate int64, mode LeakyBucketMode) *Builder
- func (b *Builder) LimitFunc(fn func(ctx context.Context, key string) int64) *Builder
- func (b *Builder) OnLimitExceeded(fn func(ctx context.Context, key string, result *Result)) *Builder
- func (b *Builder) Redis(client redis.UniversalClient) *Builder
- func (b *Builder) SlidingWindow(maxRequests int64, window time.Duration) *Builder
- func (b *Builder) SlidingWindowCounter(maxRequests int64, window time.Duration) *Builder
- func (b *Builder) Store(s store.Store) *Builder
- func (b *Builder) TokenBucket(capacity, refillRate int64) *Builder
- type Clock
- type FakeClock
- type LeakyBucketMode
- type LeakyBucketResult
- type Limiter
- func New(redisURL string, rate Rate, opts ...Option) (Limiter, error)
- func NewCMS(limit, windowSeconds int64, epsilon, delta float64, opts ...Option) (Limiter, error)
- func NewFixedWindow(maxRequests, windowSeconds int64, opts ...Option) (Limiter, error)
- func NewGCRA(rate, burst int64, opts ...Option) (Limiter, error)
- func NewInMemory(rate Rate, opts ...Option) (Limiter, error)
- func NewLeakyBucket(capacity, leakRate int64, mode LeakyBucketMode, opts ...Option) (Limiter, error)
- func NewPreFilter(local, precise Limiter) Limiter
- func NewSlidingWindow(maxRequests, windowSeconds int64, opts ...Option) (Limiter, error)
- func NewSlidingWindowCounter(maxRequests, windowSeconds int64, opts ...Option) (Limiter, error)
- func NewTokenBucket(capacity, refillRate int64, opts ...Option) (Limiter, error)
- type Option
- func WithClock(clock Clock) Option
- func WithDryRun(dryRun bool) Option
- func WithDryRunLogFunc(fn func(key string, result *Result)) Option
- func WithFailOpen(failOpen bool) Option
- func WithHashTag() Option
- func WithKeyPrefix(prefix string) Option
- func WithLimitFunc(fn func(ctx context.Context, key string) int64) Option
- func WithOnLimitExceeded(fn func(ctx context.Context, key string, result *Result)) Option
- func WithRedis(client redis.UniversalClient) Option
- func WithStore(s store.Store) Option
- type Options
- type Rate
- type Result
Examples ¶
Constants ¶
const Unlimited int64 = -1
Unlimited is the sentinel value for no rate limit. Return it from LimitFunc to allow the key without consuming quota (e.g. trusted users, internal services).
Variables ¶
This section is empty.
Functions ¶
func CMSMemoryBytes ¶ added in v1.1.0
CMSMemoryBytes returns the approximate heap usage of a CMS limiter created with the given error parameters. Useful for capacity planning without constructing a limiter.
Example ¶
package main
import (
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
bytes := goratelimit.CMSMemoryBytes(0.01, 0.001)
fmt.Printf("memory=%d bytes\n", bytes)
}
Output: memory=30464 bytes
Types ¶
type Builder ¶
type Builder struct {
// contains filtered or unexported fields
}
Builder provides a fluent API for constructing a Limiter.
limiter, err := goratelimit.NewBuilder().
FixedWindow(100, 60*time.Second).
Redis(client).
HashTag().
Build()
func NewBuilder ¶
func NewBuilder() *Builder
NewBuilder returns a new Builder with default options.
Example ¶
package main
import (
"context"
"fmt"
"time"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewBuilder().
SlidingWindowCounter(100, 60*time.Second).
KeyPrefix("api").
FailOpen(true).
Build()
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v remaining=%d\n", result.Allowed, result.Remaining)
}
Output: allowed=true remaining=99
func (*Builder) CMS ¶ added in v1.1.0
CMS configures a Count-Min Sketch rate limiter. limit is the max requests per window. window is the window duration. epsilon is the acceptable error rate (e.g. 0.01). delta is the failure probability (e.g. 0.001). This algorithm is in-memory only; Redis options are ignored.
func (*Builder) DryRun ¶ added in v1.3.0
DryRun enables dry-run mode: the limiter never denies, and logs when a request would have been denied.
func (*Builder) DryRunLogFunc ¶ added in v1.3.0
DryRunLogFunc sets the logger called when dry run would have denied a request.
func (*Builder) FailOpen ¶
FailOpen sets the fail-open/fail-closed behavior when the backend is unreachable.
func (*Builder) FixedWindow ¶
FixedWindow configures a Fixed Window algorithm. maxRequests is the limit per window. window is the window duration.
func (*Builder) GCRA ¶
GCRA configures a Generic Cell Rate Algorithm limiter. rate is sustained requests per second. burst is the maximum burst.
func (*Builder) LeakyBucket ¶
func (b *Builder) LeakyBucket(capacity, leakRate int64, mode LeakyBucketMode) *Builder
LeakyBucket configures a Leaky Bucket algorithm. capacity is the bucket size. leakRate is tokens leaked per second. mode selects Policing (hard reject) or Shaping (queue with delay).
func (*Builder) LimitFunc ¶
LimitFunc sets a dynamic per-key limit resolver. The function is called on every Allow/AllowN with context and key. Return the effective limit, goratelimit.Unlimited for no limit, or any other value <= 0 to use the default.
func (*Builder) OnLimitExceeded ¶ added in v1.3.0
func (b *Builder) OnLimitExceeded(fn func(ctx context.Context, key string, result *Result)) *Builder
OnLimitExceeded sets a callback invoked when a request is denied due to rate limit. Use for alerting, analytics, or logging. Not called on backend errors or when DryRun is true.
func (*Builder) Redis ¶
func (b *Builder) Redis(client redis.UniversalClient) *Builder
Redis sets the Redis backend. Accepts any redis.UniversalClient.
func (*Builder) SlidingWindow ¶
SlidingWindow configures a Sliding Window Log algorithm. maxRequests is the limit per window. window is the window duration. Stores every request timestamp; for high throughput prefer SlidingWindowCounter.
func (*Builder) SlidingWindowCounter ¶
SlidingWindowCounter configures a Sliding Window Counter algorithm. maxRequests is the limit per window. window is the window duration. Uses weighted-counter approximation with O(1) memory per key.
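The weighted-counter approximation behind this algorithm keeps only two fixed-window counters and weights the previous window by how much of it still overlaps the sliding window. A generic sketch of the technique (not the package's internals):

```go
package main

import "fmt"

// estimate approximates the request count over a sliding window from two
// fixed-window counters: the previous window's count is scaled by the
// fraction of it still inside the sliding window, then the current
// window's count is added. This is what gives O(1) memory per key.
func estimate(prevCount, currCount int64, elapsedFraction float64) float64 {
	return float64(prevCount)*(1-elapsedFraction) + float64(currCount)
}

func main() {
	// 40% into the current window: 60% of the previous window still counts.
	fmt.Println(estimate(80, 30, 0.4)) // 80*0.6 + 30 = 78
}
```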
func (*Builder) TokenBucket ¶
TokenBucket configures a Token Bucket algorithm. capacity is the burst size. refillRate is tokens added per second.
type Clock ¶ added in v1.3.0
Clock provides the current time. Leave it nil in production to use time.Now, or inject a fake clock in tests to advance time without time.Sleep.
type FakeClock ¶ added in v1.3.0
type FakeClock struct {
// contains filtered or unexported fields
}
FakeClock is a deterministic clock for testing. Advance time with Advance instead of sleeping.
func NewFakeClock ¶ added in v1.3.0
func NewFakeClock() *FakeClock
NewFakeClock returns a fake clock starting at the Unix epoch. Use Advance to move time forward in tests.
func NewFakeClockAt ¶ added in v1.3.0
NewFakeClockAt returns a fake clock starting at the given time.
type LeakyBucketMode ¶
type LeakyBucketMode string
LeakyBucketMode defines the operating mode of a leaky bucket limiter.
const (
	// Policing mode drops requests that exceed capacity (hard rejection).
	Policing LeakyBucketMode = "policing"
	// Shaping mode queues requests and assigns a processing delay.
	Shaping LeakyBucketMode = "shaping"
)
type LeakyBucketResult ¶
type LeakyBucketResult struct {
Result
Delay time.Duration // For shaping mode: how long to wait before processing.
}
LeakyBucketResult extends Result with shaping-specific delay information.
type Limiter ¶
type Limiter interface {
// Allow checks whether a single request identified by key should be allowed.
Allow(ctx context.Context, key string) (Result, error)
// AllowN checks whether n requests identified by key should be allowed.
AllowN(ctx context.Context, key string, n int) (Result, error)
// Reset clears all rate limit state for the given key.
Reset(ctx context.Context, key string) error
}
Limiter is the core interface for all rate limiting algorithms. All implementations (in-memory and Redis-backed) satisfy this interface, making algorithms swappable without changing caller code.
Example (AllowN) ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewTokenBucket(10, 1)
result, _ := limiter.AllowN(context.Background(), "user:123", 3)
fmt.Printf("allowed=%v remaining=%d\n", result.Allowed, result.Remaining)
}
Output: allowed=true remaining=7
Example (Reset) ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
ctx := context.Background()
limiter, _ := goratelimit.NewFixedWindow(1, 60)
limiter.Allow(ctx, "user:123")
result, _ := limiter.Allow(ctx, "user:123")
fmt.Printf("before reset: allowed=%v\n", result.Allowed)
_ = limiter.Reset(ctx, "user:123")
result, _ = limiter.Allow(ctx, "user:123")
fmt.Printf("after reset: allowed=%v\n", result.Allowed)
}
Output:
before reset: allowed=false
after reset: allowed=true
func New ¶ added in v1.2.0
New creates a rate limiter with sensible defaults (Fixed Window algorithm). Pass an empty redisURL for in-memory mode, or a Redis URL (e.g. "redis://localhost:6379/0") for distributed mode.
limiter, err := goratelimit.New("", goratelimit.PerMinute(100))
limiter, err := goratelimit.New("redis://localhost:6379", goratelimit.PerMinute(100))
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.New("", goratelimit.PerMinute(100))
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v remaining=%d limit=%d\n", result.Allowed, result.Remaining, result.Limit)
}
Output: allowed=true remaining=99 limit=100
func NewCMS ¶ added in v1.1.0
NewCMS creates a Count-Min Sketch rate limiter that uses fixed memory regardless of the number of unique keys. It approximates per-key counts using two rotating sketches for a sliding window effect.
This is an in-memory-only algorithm — no Redis backend is supported. Its primary use case is as a fast local pre-filter in front of a precise distributed limiter (see NewPreFilter).
limit — max requests per window
windowSeconds — window size in seconds
epsilon — acceptable error rate (e.g. 0.01 = 1%)
delta — failure probability (e.g. 0.001 = 0.1%)
Memory: 2 × ⌈e/ε⌉ × ⌈ln(1/δ)⌉ × 8 bytes. Example: ε=0.01, δ=0.001 → ~30 KB fixed.
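The memory bound works out as follows — two sketches, each a ⌈ln(1/δ)⌉ × ⌈e/ε⌉ grid of 8-byte counters. A minimal sketch of the arithmetic (the exported CMSMemoryBytes computes the same quantity):

```go
package main

import (
	"fmt"
	"math"
)

// cmsBytes evaluates the documented bound 2 × ⌈e/ε⌉ × ⌈ln(1/δ)⌉ × 8 bytes:
// two rotating sketches of width ⌈e/ε⌉ and depth ⌈ln(1/δ)⌉, uint64 counters.
func cmsBytes(epsilon, delta float64) int {
	width := int(math.Ceil(math.E / epsilon))    // 272 for ε=0.01
	depth := int(math.Ceil(math.Log(1 / delta))) // 7 for δ=0.001
	return 2 * width * depth * 8
}

func main() {
	fmt.Println(cmsBytes(0.01, 0.001)) // 30464 — matches the CMSMemoryBytes example
}
```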
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewCMS(100, 60, 0.01, 0.001)
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v remaining=%d\n", result.Allowed, result.Remaining)
}
Output: allowed=true remaining=99
func NewFixedWindow ¶
NewFixedWindow creates a Fixed Window rate limiter. maxRequests is the maximum requests allowed per window. windowSeconds is the window duration in seconds. Pass WithRedis for distributed mode; omit for in-memory.
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewFixedWindow(10, 60)
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v remaining=%d\n", result.Allowed, result.Remaining)
}
Output: allowed=true remaining=9
func NewGCRA ¶
NewGCRA creates a GCRA (Generic Cell Rate Algorithm) rate limiter. rate is the sustained request rate per second. burst is the maximum burst size. Pass WithRedis for distributed mode; omit for in-memory.
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewGCRA(5, 10)
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v remaining=%d\n", result.Allowed, result.Remaining)
}
Output: allowed=true remaining=8
func NewInMemory ¶ added in v1.2.0
NewInMemory creates an in-memory rate limiter — ideal for tests and single-process deployments.
limiter, err := goratelimit.NewInMemory(goratelimit.PerMinute(100))
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewInMemory(goratelimit.PerHour(500))
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v limit=%d\n", result.Allowed, result.Limit)
}
Output: allowed=true limit=500
func NewLeakyBucket ¶
func NewLeakyBucket(capacity, leakRate int64, mode LeakyBucketMode, opts ...Option) (Limiter, error)
NewLeakyBucket creates a Leaky Bucket rate limiter. capacity is the bucket size. leakRate is tokens leaked per second. mode selects Policing (hard reject) or Shaping (queue with delay). Pass WithRedis for distributed mode; omit for in-memory.
Example (Policing) ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewLeakyBucket(10, 1, goratelimit.Policing)
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v remaining=%d\n", result.Allowed, result.Remaining)
}
Output: allowed=true remaining=9
Example (Shaping) ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewLeakyBucket(10, 1, goratelimit.Shaping)
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v\n", result.Allowed)
}
Output: allowed=true
func NewPreFilter ¶ added in v1.1.0
NewPreFilter creates a rate limiter that checks the local limiter first and only escalates to the precise limiter when the local check passes.
Under normal traffic both limiters are consulted and the precise limiter's result is authoritative. Under attack the local limiter absorbs the load, shielding the remote backend from being overwhelmed.
cms, _ := goratelimit.NewCMS(100, 60, 0.01, 0.001)
gcra, _ := goratelimit.NewGCRA(10, 20, goratelimit.WithRedis(client))
limiter := goratelimit.NewPreFilter(cms, gcra)
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
local, _ := goratelimit.NewCMS(100, 60, 0.01, 0.001)
precise, _ := goratelimit.NewGCRA(5, 10)
limiter := goratelimit.NewPreFilter(local, precise)
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v remaining=%d\n", result.Allowed, result.Remaining)
}
Output: allowed=true remaining=8
func NewSlidingWindow ¶
NewSlidingWindow creates a Sliding Window Log rate limiter. maxRequests is the maximum requests allowed per window. windowSeconds is the window duration in seconds. Note: this algorithm stores every request timestamp and has O(n) memory per key. For high-throughput keys, prefer NewSlidingWindowCounter. Pass WithRedis for distributed mode; omit for in-memory.
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewSlidingWindow(10, 60)
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v remaining=%d\n", result.Allowed, result.Remaining)
}
Output: allowed=true remaining=9
func NewSlidingWindowCounter ¶
NewSlidingWindowCounter creates a Sliding Window Counter rate limiter. This uses the weighted-counter approximation (~1% error) with O(1) memory per key. maxRequests is the maximum requests allowed per window. windowSeconds is the window duration in seconds. Pass WithRedis for distributed mode; omit for in-memory.
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewSlidingWindowCounter(10, 60)
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v remaining=%d\n", result.Allowed, result.Remaining)
}
Output: allowed=true remaining=9
func NewTokenBucket ¶
NewTokenBucket creates a Token Bucket rate limiter. capacity is the maximum number of tokens (burst size). refillRate is the number of tokens added per second. Pass WithRedis for distributed mode; omit for in-memory.
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewTokenBucket(100, 10)
result, _ := limiter.Allow(context.Background(), "user:123")
fmt.Printf("allowed=%v remaining=%d\n", result.Allowed, result.Remaining)
}
Output: allowed=true remaining=99
type Option ¶
type Option func(*Options)
Option is a functional option for configuring a Limiter.
func WithClock ¶ added in v1.3.0
WithClock sets the clock used for time. In tests, pass a FakeClock and call Advance to simulate elapsed time without time.Sleep.
Example ¶
package main
import (
"context"
"fmt"
"time"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
clock := goratelimit.NewFakeClock()
limiter, _ := goratelimit.NewFixedWindow(2, 60, goratelimit.WithClock(clock))
ctx := context.Background()
limiter.Allow(ctx, "k")
limiter.Allow(ctx, "k")
r, _ := limiter.Allow(ctx, "k")
fmt.Printf("before advance: allowed=%v\n", r.Allowed)
clock.Advance(61 * time.Second)
r, _ = limiter.Allow(ctx, "k")
fmt.Printf("after advance: allowed=%v\n", r.Allowed)
}
Output:
before advance: allowed=false
after advance: allowed=true
func WithDryRun ¶ added in v1.3.0
WithDryRun enables dry-run mode: the limiter never denies; when a request would have been denied, DryRunLogFunc is called (or [DRYRUN] is logged). Use for safe production rollout to observe what would be rate limited.
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewFixedWindow(2, 60, goratelimit.WithDryRun(true))
ctx := context.Background()
limiter.Allow(ctx, "key")
limiter.Allow(ctx, "key")
// Would be denied without dry run; with dry run still allowed
r, _ := limiter.Allow(ctx, "key")
fmt.Printf("allowed=%v (limit=%d remaining=%d)\n", r.Allowed, r.Limit, r.Remaining)
}
Output: allowed=true (limit=2 remaining=0)
func WithDryRunLogFunc ¶ added in v1.3.0
WithDryRunLogFunc sets the logger called when DryRun is true and a request would have been denied. If nil, log.Printf with [DRYRUN] prefix is used.
func WithFailOpen ¶
WithFailOpen controls behavior when the backend is unreachable. If true (default), requests are allowed on errors. If false, requests are denied on errors.
func WithHashTag ¶
func WithHashTag() Option
WithHashTag enables Redis Cluster hash-tag wrapping. Keys become "prefix:{key}" so all keys for a given user route to the same Redis Cluster slot. Required for multi-key algorithms (Sliding Window Counter) in Cluster mode.
func WithKeyPrefix ¶
WithKeyPrefix sets the prefix prepended to all storage keys. Default: "ratelimit".
func WithLimitFunc ¶
WithLimitFunc sets a dynamic limit resolver. The function is called on every Allow/AllowN with the request context and key. Use context for plan-based limits (e.g. ctx.Value("plan")). Return the effective limit, Unlimited for no limit, or <= 0 (other than Unlimited) to use the construction-time default.
Example ¶
package main
import (
"context"
"fmt"
goratelimit "github.com/krishna-kudari/ratelimit"
)
func main() {
limiter, _ := goratelimit.NewFixedWindow(5, 60,
goratelimit.WithLimitFunc(func(ctx context.Context, key string) int64 {
if key == "premium" {
return 1000
}
return 0
}),
)
ctx := context.Background()
r1, _ := limiter.Allow(ctx, "premium")
r2, _ := limiter.Allow(ctx, "free")
fmt.Printf("premium: limit=%d\nfree: limit=%d\n", r1.Limit, r2.Limit)
}
Output:
premium: limit=1000
free: limit=5
func WithOnLimitExceeded ¶ added in v1.3.0
WithOnLimitExceeded sets a callback invoked when a request is denied due to rate limit. Use for alerting, analytics, or logging. Not called on backend errors or when DryRun is true.
func WithRedis ¶
func WithRedis(client redis.UniversalClient) Option
WithRedis configures the limiter to use Redis as its backing store. Accepts any redis.UniversalClient: *redis.Client (standalone), *redis.ClusterClient (cluster), *redis.Ring (ring), or sentinel. When set, the limiter operates in distributed mode.
type Options ¶
type Options struct {
// Store is the pluggable backend for rate limit state.
// Takes precedence over RedisClient if both are set.
Store store.Store
// RedisClient is a Redis connection for distributed rate limiting.
// Accepts *redis.Client, *redis.ClusterClient, *redis.Ring, or any
// redis.UniversalClient implementation.
RedisClient redis.UniversalClient
// KeyPrefix is prepended to all storage keys.
// Default: "ratelimit".
KeyPrefix string
// FailOpen controls behavior when the backend is unreachable.
// If true (default), requests are allowed on errors.
// If false, requests are denied on errors.
FailOpen bool
// HashTag enables Redis Cluster hash-tag wrapping of user keys.
// When true, keys are formatted as "prefix:{key}" instead of "prefix:key",
// ensuring all keys for the same logical entity route to the same slot.
// This is required for Sliding Window Counter (multi-key) and recommended
// for any Redis Cluster deployment.
HashTag bool
// LimitFunc dynamically resolves the rate limit for each key.
// Called with the request context (e.g. from middleware) so limits can depend on
// user plan, JWT claims, or other context values. Returns the effective limit
// (maxRequests / capacity / burst). Return Unlimited for no limit; return <= 0
// (other than Unlimited) to use the construction-time default.
LimitFunc func(ctx context.Context, key string) int64
// Clock provides the current time. If nil, time.Now is used.
// Inject a FakeClock in tests to advance time without time.Sleep.
Clock Clock
// DryRun, when true, never denies: Allow/AllowN always return Allowed=true,
// but when a request would have been denied, the optional DryRunLogFunc is
// called (or log.Printf with [DRYRUN] prefix if nil) so operators can see
// what would be rate limited.
DryRun bool
// DryRunLogFunc is called when DryRun is true and a request would have been
// denied. If nil, log.Printf("[DRYRUN] would deny key=...") is used.
DryRunLogFunc func(key string, result *Result)
// OnLimitExceeded is called when a request is denied due to rate limit.
// Use for alerting, analytics, or logging. Not called on backend errors or in dry-run.
OnLimitExceeded func(ctx context.Context, key string, result *Result)
}
Options configures behavior shared across all algorithm implementations.
func (*Options) FormatKey ¶
FormatKey builds a storage key. With HashTag enabled the user key is wrapped in {}: "prefix:{key}" so all derived keys for the same user land on the same Redis Cluster slot.
func (*Options) FormatKeySuffix ¶
FormatKeySuffix builds a storage key with an additional suffix. "prefix:{key}:suffix" (hash-tag) or "prefix:key:suffix" (plain).
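The documented key shapes can be sketched as plain string formatting — illustrative only, not the package's implementation; the "prev" suffix below is a hypothetical example. With hash-tagging, Redis Cluster hashes only the {}-wrapped segment, so every derived key for the same user lands on the same slot.

```go
package main

import "fmt"

// formatKey sketches the documented shapes: "prefix:key",
// "prefix:{key}" (hash-tag), and "prefix:{key}:suffix".
func formatKey(prefix, key, suffix string, hashTag bool) string {
	k := key
	if hashTag {
		k = "{" + key + "}" // Cluster hashes only this segment
	}
	if suffix != "" {
		return prefix + ":" + k + ":" + suffix
	}
	return prefix + ":" + k
}

func main() {
	fmt.Println(formatKey("ratelimit", "user:123", "", false))    // ratelimit:user:123
	fmt.Println(formatKey("ratelimit", "user:123", "", true))     // ratelimit:{user:123}
	fmt.Println(formatKey("ratelimit", "user:123", "prev", true)) // ratelimit:{user:123}:prev
}
```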
type Rate ¶ added in v1.2.0
type Rate struct {
// contains filtered or unexported fields
}
Rate specifies a request limit over a time window. Create one with PerSecond, PerMinute, or PerHour.
Source Files ¶

Directories ¶

| Path | Synopsis |
|---|---|
| cache | Package cache provides an L1 in-process cache that wraps any Limiter. |
| examples | |
| examples/advanced (command) | Advanced features — dynamic limits, fail-open, local cache, Prometheus, custom keys. |
| examples/basic (command) | All seven algorithms — direct usage, AllowN, Reset, PreFilter, and Builder API. |
| examples/demo (command) | |
| examples/echoserver (command) | Complete Echo server with rate limiting middleware. |
| examples/fiberserver (command) | Complete Fiber server with rate limiting middleware. |
| examples/ginserver (command) | Complete Gin server with rate limiting middleware. |
| examples/grpcserver (command) | gRPC server with rate limiting interceptors. |
| examples/httpserver (command) | Complete net/http server with rate limiting middleware. |
| examples/redis (command) | Rate limiting with Redis backend — works with standalone, Cluster, Ring, Sentinel. |
| metrics | Package metrics provides Prometheus instrumentation for rate limiters. |
| | This file is kept for backward-compatibility documentation. |
| echomw | Package echomw provides Echo middleware for rate limiting. |
| fibermw | Package fibermw provides Fiber middleware for rate limiting. |
| ginmw | Package ginmw provides Gin middleware for rate limiting. |
| grpcmw | Package grpcmw provides gRPC server interceptors for rate limiting. |
| store | Package store defines the backend storage contract for rate limiters. |
| store/memory | Package memory provides an in-memory implementation of store.Store. |
| store/redis | Package redis provides a Redis-backed implementation of store.Store. |