Open Source · MIT · Rust · 450+ stars

Your AI agent is drowning
in CLI noise. Fix it.

rtk compresses command outputs before they reach the context window. Better reasoning. Longer sessions. Lower costs.

89% avg. noise removed
3x longer sessions
30+ commands
Viking warrior smashing token noise with his axe
Terminal
$ rtk gain

📊 RTK Token Savings
════════════════════════════════════════

Total commands:    2,927
Input tokens:      11.6M
Output tokens:     1.4M
Tokens saved:      10.3M (89.2%)

By Command:
────────────────────────────────────────
Command               Count      Saved     Avg%
rtk find                324       6.8M    78.3%
rtk git status          215       1.4M    80.8%
rtk grep                227     786.7K    49.5%
rtk cargo test           16      50.1K    91.8%

$
Viking bracing against a tidal wave of tokens

The problem with AI coding today

Every command your agent runs pollutes the context window with noise. Here's what that costs you.

Context pollution

Your 200K context window isn't infinite. When cargo test dumps 5,000 tokens of boilerplate, that's 5,000 fewer tokens for reasoning about your actual code.

Worse AI reasoning

Sessions too short

Context overflows, the agent restarts, you lose the thread. On flat-rate plans, you hit rate limits 40% faster than you should.

3x shorter sessions

Costs that explode

On pay-per-token setups (API, Gemini CLI, Aider), 70% of your bill is noise the LLM doesn't need. A team of 10 wastes ~$1,750/month.
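The "$1,750/month" figure follows from a simple back-of-the-envelope calculation. A minimal sketch, assuming ~$250/dev/month in API spend (an illustrative number, not one published by rtk) with ~70% of it going to noise:

```python
# Back-of-the-envelope sketch of the "team of 10 wastes ~$1,750/month" figure.
# The per-dev spend is an assumption (not an rtk-published number): ~$250/month
# in API costs, of which ~70% is CLI noise the model never needed.
def monthly_waste(devs: int, spend_per_dev: float = 250.0, noise_share: float = 0.70) -> int:
    """Estimated dollars per month spent on tokens the LLM did not need."""
    return round(devs * spend_per_dev * noise_share)

print(f"Team of 10 wastes ~${monthly_waste(10):,}/month")  # ~$1,750/month
```

Plug in your own spend and team size to see what the noise share costs you.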

~70% wasted spend
Viking compressing tokens into a glowing crystal

See the difference

Real outputs, real savings. Side-by-side comparison on actual commands.

cargo test ~4,823 tokens
warning: unused variable: `start`
   --> src/init.rs:561:17
    |
561 |             let start = i;
    |                 ^^^^^ help: prefix it with an underscore

warning: unused variable: `original_keys`
    --> src/init.rs:1287:13

warning: constant `BILLION` is never used
  --> src/cc_economics.rs:17:7

warning: `rtk` (bin "rtk" test) generated 17 warnings
    Finished `test` profile target(s) in 0.20s
     Running unittests src/main.rs

running 262 tests
test cargo_cmd::tests::test_filter_cargo_build_success ... ok
test cargo_cmd::tests::test_filter_cargo_clippy_clean ... ok
test cargo_cmd::tests::test_filter_cargo_install_empty ... ok
test cargo_cmd::tests::test_filter_cargo_install_already ... ok
test cargo_cmd::tests::test_filter_cargo_install_from_path ... ok
test cargo_cmd::tests::test_filter_cargo_install_replace ... ok
test cargo_cmd::tests::test_filter_cargo_install_success ... ok
test cargo_cmd::tests::test_filter_cargo_test_all_pass ... ok
test cargo_cmd::tests::test_filter_cargo_test_failures ... ok
test cc_economics::tests::test_compute_dual_metrics ... ok
test cc_economics::tests::test_compute_weighted_metrics ... ok
test cc_economics::tests::test_period_economics_new ... ok
test curl_cmd::tests::test_filter_curl_json ... ok
test diff_cmd::tests::test_compute_diff_identical ... ok
test diff_cmd::tests::test_compute_diff_added_lines ... ok
test diff_cmd::tests::test_condense_unified_diff ... ok
test filter::tests::test_filter_level_parsing ... ok
test filter::tests::test_language_detection ... ok
test git::tests::test_compact_diff ... ok
test git::tests::test_filter_branch_output ... ok
test git::tests::test_filter_log_output ... ok
test git::tests::test_filter_status_with_args ... ok
test init::tests::test_hook_already_present ... ok
test init::tests::test_init_is_idempotent ... ok
test json_cmd::tests::test_extract_schema_simple ... ok
test ls::tests::test_compact_basic ... ok
test ls::tests::test_compact_filters_noise ... ok
... 235 more tests, all passing

test result: ok. 262 passed; 0 failed; 0 ignored; 0 measured
rtk cargo test ~11 tokens -99%
✓ cargo test: 262 passed (1 suite, 0.08s)
pytest -v ~756 tokens
===== test session starts ======
platform darwin -- Python 3.14.3, pytest-9.0.2
cachedir: .pytest_cache
rootdir: /app
plugins: anyio-4.12.1
collected 33 items

test_utils.py::TestStringUtils::test_strip PASSED    [  3%]
test_utils.py::TestStringUtils::test_upper PASSED    [  6%]
test_utils.py::TestStringUtils::test_split PASSED    [  9%]
test_utils.py::TestStringUtils::test_join PASSED     [ 12%]
test_utils.py::TestStringUtils::test_replace PASSED  [ 15%]
test_utils.py::TestStringUtils::test_starts PASSED   [ 18%]
test_utils.py::TestStringUtils::test_ends PASSED     [ 21%]
test_utils.py::TestStringUtils::test_contains PASSED [ 24%]
test_utils.py::TestMathUtils::test_sqrt PASSED       [ 27%]
test_utils.py::TestMathUtils::test_ceil PASSED       [ 30%]
test_utils.py::TestMathUtils::test_floor PASSED      [ 33%]
test_utils.py::TestMathUtils::test_pow PASSED        [ 36%]
test_utils.py::TestMathUtils::test_factorial PASSED  [ 39%]
test_utils.py::TestMathUtils::test_gcd PASSED        [ 42%]
test_utils.py::TestMathUtils::test_isnan PASSED      [ 45%]
test_utils.py::TestMathUtils::test_pi PASSED         [ 48%]
test_utils.py::TestJsonUtils::test_dumps PASSED      [ 51%]
test_utils.py::TestJsonUtils::test_loads PASSED      [ 54%]
test_utils.py::TestJsonUtils::test_roundtrip PASSED  [ 57%]
test_utils.py::TestJsonUtils::test_pretty PASSED     [ 60%]
test_utils.py::TestPathUtils::test_join PASSED       [ 63%]
test_utils.py::TestPathUtils::test_basename PASSED   [ 66%]
test_utils.py::TestPathUtils::test_dirname PASSED    [ 69%]
test_utils.py::TestPathUtils::test_splitext PASSED   [ 72%]
test_utils.py::TestPathUtils::test_exists PASSED     [ 75%]
test_utils.py::TestListOps::test_sort PASSED         [ 78%]
test_utils.py::TestListOps::test_reverse PASSED      [ 81%]
test_utils.py::TestListOps::test_filter PASSED       [ 84%]
test_utils.py::TestListOps::test_map PASSED          [ 87%]
test_utils.py::TestListOps::test_zip PASSED          [ 90%]
test_utils.py::TestListOps::test_enumerate PASSED    [ 93%]
test_utils.py::TestListOps::test_comprehension PASSED[ 96%]
test_utils.py::TestListOps::test_flatten PASSED      [100%]

====== 33 passed in 0.05s ======
rtk test "pytest -v" ~24 tokens -96%
📊 SUMMARY:
  ====== 33 passed in 0.02s ======
go test ./... -v ~592 tokens
=== RUN   TestControllers
Running Suite: Controller Suite
Random Seed: 1771093287

Will run 1 of 1 specs
•

Ran 1 of 1 Specs in 6.037 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending
--- PASS: TestControllers (6.04s)
PASS
ok  kubecraft.ai/.../controller  6.610s
=== RUN   TestNewAWSBedrockClient
--- PASS: TestNewAWSBedrockClient (0.00s)
=== RUN   TestNewAWSBedrockClientWithConfig
--- PASS: TestNewAWSBedrockClientWithConfig (0.00s)
=== RUN   TestAWSBedrockGenerateCode_Success
--- PASS: TestAWSBedrockGenerateCode_Success (0.00s)
=== RUN   TestNewAzureOpenAIClient
--- PASS: TestNewAzureOpenAIClient (0.00s)
=== RUN   TestAzureOpenAIGenerateCode_Success
--- PASS: TestAzureOpenAIGenerateCode_Success (0.00s)
=== RUN   TestAzureOpenAIGenerateCode_Error
--- PASS: TestAzureOpenAIGenerateCode_Error (0.00s)
=== RUN   TestNewMistralClient
--- PASS: TestNewMistralClient (0.00s)
=== RUN   TestGenerateCode_Success
--- PASS: TestGenerateCode_Success (0.00s)
=== RUN   TestGenerateCode_Error
--- PASS: TestGenerateCode_Error (0.00s)
=== RUN   TestNewVertexAIClient
--- PASS: TestNewVertexAIClient (0.00s)
=== RUN   TestVertexAIGenerateCode_Success
--- PASS: TestVertexAIGenerateCode_Success (0.00s)
=== RUN   TestVertexAIGenerateCode_Error
--- PASS: TestVertexAIGenerateCode_Error (0.00s)
PASS
ok  kubecraft.ai/.../llm  0.776s
rtk test "go test ./... -v" ~246 tokens -58%
📊 SUMMARY:
  --- PASS: TestControllers (5.07s)
  ok  kubecraft.ai/.../controller  5.847s
  --- PASS: TestNewAWSBedrockClient (0.00s)
  --- PASS: TestNewAWSBedrockClientWithConfig (0.00s)
  --- PASS: TestAWSBedrockGenerateCode_Success (0.00s)
  --- PASS: TestNewAzureOpenAIClient (0.00s)
  --- PASS: TestAzureOpenAIGenerateCode_Success (0.00s)
  --- PASS: TestAzureOpenAIGenerateCode_Error (0.00s)
  --- PASS: TestNewMistralClient (0.00s)
  --- PASS: TestGenerateCode_Success (0.00s)
  --- PASS: TestGenerateCode_Error (0.00s)
  --- PASS: TestNewVertexAIClient (0.00s)
  --- PASS: TestVertexAIGenerateCode_Success (0.00s)
  --- PASS: TestVertexAIGenerateCode_Error (0.00s)
  ok  kubecraft.ai/.../llm  0.246s
git diff HEAD~1 ~21,500 tokens
diff --git a/index.html b/index.html
index 1b7488b..0ebac4f 100644
--- a/index.html
+++ b/index.html
@@ -629,7 +629,7 @@
       width: 100%;
       border-collapse: collapse;
       font-size: 0.88rem;
-      min-width: 800px;
+      min-width: 1050px;
     }

     .compare-table th {
@@ -1051,6 +1051,114 @@
       font-size: 0.78rem;
     }

+    /* === Share My Gain === */
+    .share-gain { background: var(--bg); }
+
+    .share-gain-card {
+      max-width: 600px;
+      margin: 0 auto;
+      padding: 40px 36px;
+      border: 1px solid var(--border);
+      border-radius: var(--radius-lg);
... 855 insertions(+), 266 deletions(-)
rtk git diff HEAD~1 ~1,259 tokens -94%
index.html | 1121 ++++++++++++------
 1 file changed, 855 ins(+), 266 del(-)

--- Changes ---

📄 index.html
  @@ -629,7 +629,7 @@
  -      min-width: 800px;
  +      min-width: 1050px;
  @@ -1051,6 +1051,114 @@
  +    /* === Share My Gain === */
  +    .share-gain { background: var(--bg); }
  @@ -1124,6 +1232,39 @@
  +    /* === Language Switcher === */
  ... (truncated)
git status ~120 tokens
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   index.html
	modified:   src/main.rs
	modified:   src/config.rs

Untracked files:
  (use "git add <file>..." to include in what will be committed)
	.fastembed_cache/
	tests/

no changes added to commit (use "git add" and/or "git commit -a")
rtk git status ~30 tokens -75%
📌 master...origin/master
📝 Modified: 3 files
   index.html
   src/main.rs
   src/config.rs
❓ Untracked: 2 files
   .fastembed_cache/
   tests/
git log --stat -10 ~1,430 tokens
commit c84fa3c fix: add website URL (rtk-ai.app)
Author: patrick szymkowiak <52030887+pszymkowiak@...>
Date:   Thu Feb 12 12:28:37 2026 +0100

    - Cargo.toml: add homepage field
    - README.md: add website/github/install links
    - Formula/rtk.rb: update homepage to website

 .github/workflows/release.yml | 4 ++--
 Cargo.toml                    | 1 +
 Formula/rtk.rb                | 2 +-
 README.md                     | 8 ++++++++
 5 files changed, 14 insertions(+), 3 deletions(-)

commit a0d2184 feat(ci): automate Homebrew formula
Author: patrick szymkowiak <52030887+pszymkowiak@...>
... 8 more commits with full stats
rtk git log -n 10 ~194 tokens -86%
c84fa3c fix: add website URL (rtk-ai.app) (#81)
a0d2184 feat(ci): automate Homebrew formula (#80)
55d010a fix: update stale repo URLs (#78)
9e764c4 chore(master): release 0.13.1
bd76b36 fix(ci): fix release workflow artifacts
0d388b3 chore(master): release 0.13.0
93364b5 feat(sqlite): custom db location
3dde699 chore(master): release 0.12.0
645a773 feat(cargo): cargo install filtering
22182e4 chore(master): release 0.11.0
cat src/main.rs ~10,176 tokens
mod cargo_cmd;
mod cc_economics;
mod ccusage;
mod config;
mod container;
mod curl_cmd;
mod deps;
mod diff_cmd;
mod discover;
mod display_helpers;
mod env_cmd;
mod filter;
mod find_cmd;
mod gain;
mod gh_cmd;
mod git;
mod grep_cmd;
mod init;
mod json_cmd;
mod learn;
mod lint_cmd;
mod local_llm;
mod log_cmd;
mod ls;
mod next_cmd;
mod npm_cmd;
mod parser;
mod playwright_cmd;
... 1,295 lines total (enums, match arms, CLI parsing)
rtk read src/main.rs -l aggressive ~504 tokens -95%
use anyhow::{Context, Result};
use clap::{Parser, Subcommand};
use std::ffi::OsString;
use std::path::{Path, PathBuf};
struct Cli {
    // ... implementation
enum Commands {
    // ... implementation
enum GitCommands {
    // ... implementation
enum DockerCommands {
    // ... implementation
enum KubectlCommands {
    // ... implementation
enum CargoCommands {
    // ... implementation
fn main() -> Result<()> {
    // ... implementation
grep -rn "pub fn" src/ ~2,108 tokens
src/cargo_cmd.rs:17:pub fn run(cmd: CargoCommand, args: &[String], verbose: u8) -> Result<()> {
src/cargo_cmd.rs:551:pub fn run_passthrough(args: &[OsString], verbose: u8) -> Result<()> {
src/cc_economics.rs:184:pub fn run(
src/ccusage.rs:118:pub fn is_available() -> bool {
src/ccusage.rs:127:pub fn fetch(granularity: Granularity) -> Result<Option<Vec<CcusagePeriod>>> {
src/config.rs:73:    pub fn load() -> Result<Self> {
src/config.rs:85:    pub fn save(&self) -> Result<()> {
src/config.rs:97:    pub fn create_default() -> Result<PathBuf> {
src/config.rs:109:pub fn show_config() -> Result<()> {
src/container.rs:16:pub fn run(cmd: ContainerCmd, args: &[String], verbose: u8) -> Result<()> {
src/curl_cmd.rs:7:pub fn run(args: &[String], verbose: u8) -> Result<()> {
src/deps.rs:8:pub fn run(path: &Path, verbose: u8) -> Result<()> {
src/diff_cmd.rs:8:pub fn run(file1: &Path, file2: &Path, verbose: u8) -> Result<()> {
src/filter.rs:57:    pub fn from_extension(ext: &str) -> Self {
src/filter.rs:309:pub fn get_filter(level: FilterLevel) -> Box<dyn FilterStrategy> {
... 112 matches across 47 files
rtk grep "pub fn" src/ ~940 tokens -55%
🔍 112 in 47F:

📄 src/cargo_cmd.rs (3):
    17: pub fn run(cmd: CargoCommand, ...) -> Result<()>
   551: pub fn run_passthrough(args: ...) -> Result<()>

📄 src/config.rs (4):
    73: pub fn load() -> Result<Self>
    85: pub fn save(&self) -> Result<()>
    97: pub fn create_default() -> Result<PathBuf>
   109: pub fn show_config() -> Result<()>

📄 src/git.rs (2):
    22: pub fn run(cmd: GitCommand, ...) -> Result<()>
  1264: pub fn run_passthrough(args: ...) -> Result<()>

📄 src/filter.rs (4):
    57: pub fn from_extension(ext: &str) -> Self
   309: pub fn get_filter(level: ...) -> Box<...>
... +62
find . -name "*.rs" ~276 tokens
./target/debug/build/serde_core-.../out/private.rs
./target/debug/build/libsqlite3-sys-.../out/bindgen.rs
./target/debug/build/serde-.../out/private.rs
./src/ls.rs
./src/local_llm.rs
./src/learn/detector.rs
./src/learn/report.rs
./src/learn/mod.rs
./src/discover/registry.rs
./src/discover/provider.rs
./src/discover/report.rs
./src/discover/mod.rs
./src/wget_cmd.rs
./src/npm_cmd.rs
./src/cargo_cmd.rs
./src/ccusage.rs
./src/config.rs
./src/lint_cmd.rs
./src/curl_cmd.rs
./src/prisma_cmd.rs
./src/cc_economics.rs
./src/find_cmd.rs
./src/gain.rs
./src/git.rs
... 49 files total
rtk find "*.rs" . ~149 tokens -46%
📁 49F 4D:

src/ cargo_cmd.rs cc_economics.rs
  ccusage.rs config.rs container.rs
  curl_cmd.rs deps.rs diff_cmd.rs
  display_helpers.rs env_cmd.rs
  filter.rs find_cmd.rs gain.rs
  gh_cmd.rs git.rs grep_cmd.rs
  init.rs json_cmd.rs lint_cmd.rs
  local_llm.rs log_cmd.rs ls.rs
  main.rs ...
src/discover/ mod.rs provider.rs
  registry.rs report.rs
src/learn/ detector.rs mod.rs report.rs
src/parser/ error.rs formatter.rs
  mod.rs types.rs
ls -la src/ ~3,200 tokens
total 928
drwxr-xr-x  41 patrick  staff   1312  2 feb 21:43 .
drwxr-xr-x  25 patrick  staff    800  2 feb 21:35 ..
-rw-r--r--   1 patrick  staff  16394  2 feb 21:35 cargo_cmd.rs
-rw-r--r--   1 patrick  staff  27220  2 feb 21:35 cc_economics.rs
-rw-r--r--   1 patrick  staff   9503  2 feb 21:35 ccusage.rs
-rw-r--r--   1 patrick  staff   2884  2 feb 21:35 config.rs
-rw-r--r--   1 patrick  staff  12886  2 feb 21:35 container.rs
-rw-r--r--   1 patrick  staff   3406  2 feb 21:35 curl_cmd.rs
-rw-r--r--   1 patrick  staff   9040  2 feb 21:35 deps.rs
-rw-r--r--   1 patrick  staff  10667  2 feb 21:35 diff_cmd.rs
drwxr-xr-x   6 patrick  staff    192  2 feb 21:35 discover
-rw-r--r--   1 patrick  staff   9732  2 feb 21:35 display_helpers.rs
-rw-r--r--   1 patrick  staff   5500  2 feb 21:35 env_cmd.rs
-rw-r--r--   1 patrick  staff  12156  2 feb 21:35 filter.rs
-rw-r--r--   1 patrick  staff   3007  2 feb 21:35 find_cmd.rs
-rw-r--r--   1 patrick  staff  10648  2 feb 21:35 gain.rs
-rw-r--r--   1 patrick  staff  24865  2 feb 21:35 gh_cmd.rs
-rw-r--r--   1 patrick  staff  36053  2 feb 21:35 git.rs
-rw-r--r--   1 patrick  staff   5263  2 feb 21:35 grep_cmd.rs
... 20 more lines
rtk ls src/ ~640 tokens -80%
discover/
parser/
cargo_cmd.rs  16.0K
cc_economics.rs  26.6K
ccusage.rs  9.3K
config.rs  2.8K
container.rs  12.6K
curl_cmd.rs  3.3K
deps.rs  8.8K
diff_cmd.rs  10.4K
display_helpers.rs  9.5K
env_cmd.rs  5.4K
filter.rs  11.9K
find_cmd.rs  2.9K
gain.rs  10.4K
...

37 files, 2 dirs (37 .rs)
cat Cargo.toml ~368 tokens
[package]
name = "rtk"
version = "0.13.1"
edition = "2021"
authors = ["Patrick Szymkowiak"]
description = "Rust Token Killer - High-performance CLI proxy..."
license = "MIT"
homepage = "https://www.rtk-ai.app"
repository = "https://github.com/rtk-ai/rtk"
readme = "README.md"
keywords = ["cli", "llm", "token", "filter", "productivity"]
categories = ["command-line-utilities", "development-tools"]

[dependencies]
clap = { version = "4", features = ["derive"] }
anyhow = "1.0"
ignore = "0.4"
walkdir = "2"
regex = "1"
lazy_static = "1.4"
serde = { version = "1", features = ["derive"] }
serde_json = { version = "1", features = ["preserve_order"] }
colored = "2"
dirs = "5"
rusqlite = { version = "0.31", features = ["bundled"] }
toml = "0.8"
chrono = "0.4"
thiserror = "1.0"
tempfile = "3"

[dev-dependencies]

[profile.release]
opt-level = 3
lto = true
... + cargo-deb, cargo-generate-rpm metadata
rtk deps ~55 tokens -85%
📦 Rust (Cargo.toml):
  Dependencies (15):
    clap (4)
    anyhow (1.0)
    ignore (0.4)
    walkdir (2)
    regex (1)
    lazy_static (1.4)
    serde (1)
    serde_json (1)
    colored (2)
    dirs (5)
    ... +5 more
Viking king on throne with 88.9% holographic display

Real-world savings

Actual rtk gain output from a happy developer.

A developer's feedback

After a few weeks of daily use: 15,720 commands processed, 138M tokens saved. Just run rtk gain to see yours.

88.9% efficiency
rtk gain dashboard showing 88.9% efficiency

Detailed breakdown

Daily, weekly, and monthly stats by command. Track your savings over time.

Per-command analytics
rtk gain daily and weekly breakdown
Viking squad marching through datacenter corridor

No AI tool offers unlimited usage. Every token counts.

Even at $200/mo, every tool has caps. RTK compresses CLI noise so your limits stretch further.

A typical 2h coding session with an AI agent:

~60 CLI commands run
~210K tokens of CLI noise
~23K with RTK (89% less)

Without RTK, CLI output alone can overflow a 200K context window. Based on avg 3,500 tokens/command measured across 2,900+ real commands.
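The session arithmetic above can be reproduced directly. The 89% compression figure is the page's measured average; the rest follows:

```python
# Reproducing the 2h-session math above: ~60 commands at an average of
# ~3,500 tokens each (measured across 2,900+ real commands), with rtk
# removing ~89% of the noise on average.
COMMANDS_PER_SESSION = 60
AVG_TOKENS_PER_COMMAND = 3_500
COMPRESSION = 0.89  # avg share of noise removed

raw = COMMANDS_PER_SESSION * AVG_TOKENS_PER_COMMAND  # 210,000 tokens
compressed = round(raw * (1 - COMPRESSION))          # ~23,100 tokens

print(f"without rtk: {raw:,} tokens; with rtk: ~{compressed:,} tokens")
```

At 210K raw tokens, CLI output alone exceeds a 200K window; at ~23K, it uses about a tenth of it.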

Claude Code Terminal
Price $20 — $200/mo
Limits ~45 msgs/5h (Pro), 5-20x on Max
Context 200K tokens
Sessions ~3x longer
Even Max $200/mo (20x Pro) has weekly caps (240-480h). Quota resets every 5h. RTK compresses CLI outputs by avg 89%, so each message carries less noise and your quota stretches ~3x.
Cursor IDE
Price $20 — $200/mo
Limits $20 credits/mo (Pro), ~225 Claude reqs
Context Up to 200K (Max mode)
Credits go ~2x further
Even Ultra $200/mo (20x credits) is capped. Each request consumes credits based on model — Claude burns 2.4x faster than Gemini. RTK compresses CLI outputs so each request starts cleaner.
OpenAI Codex Agent
Price $20/mo (Plus) — $200/mo (Pro)
Limits 30-1,500 msgs/5h by plan
Context 192K tokens
More iterations per cap
Included with ChatGPT plans. Pro $200/mo caps at 1,500 msgs/5h. The agent runs commands autonomously — each output eats your cap. RTK compresses them for more iterations per window.
Windsurf IDE
Price $15 — $60/mo
Limits 500 credits/mo (Pro)
Context 200K tokens
Credits last ~2x longer
Enterprise $60/user gets 1,000 credits/mo — still capped. Cascade consumes credits per prompt. RTK compresses CLI outputs so each interaction uses fewer tokens, stretching your credits.
Gemini CLI Terminal
Price Free — pay-per-token
Limits 1,000 req/day, 60 req/min (free)
Context 1M tokens
~70% less on token bill
Free tier is generous (1,000 req/day) but still rate-limited. Beyond that, you pay per token. RTK compresses CLI outputs by avg 89%, cutting your bill or freeing rate limit headroom.
Aider Terminal
Price Free + API costs ($5-300+/mo)
Limits Per API provider
Context Per model (up to 200K)
~70% less API cost
BYO API key — you pay per token to OpenAI, Anthropic, etc. RTK compresses every command output before it reaches the model, directly cutting your API bill by ~70% on CLI-heavy workflows.
GitHub Copilot IDE
Price Free — $39/mo (Pro+)
Limits 50-1,500 premium req/mo
Context Per model (up to 200K)
Better context quality
Enterprise $39/user: 1,000 premium req/mo. Base completions are unlimited, but Chat and the coding agent have caps. RTK keeps terminal output lean so premium requests carry more useful context.
Cline / Roo VS Code
Price Free + API costs ($0-500+/mo)
Limits Per API provider
Context Per model (up to 200K)
~70% less API cost
No tool-side limit, but your API provider caps apply. Heavy users report $200-500+/mo. RTK compresses every output by avg 89%, directly cutting your bill and reducing context overflow.

Pricing verified Feb 2026. Limits vary by usage and plan. RTK savings based on avg 89% compression across 2,900+ real commands.

Coming Soon

RTK Cloud

Visibility and control over your team's AI coding costs. Know what's wasted. Fix it.

Token analytics

Dashboard per dev, per project, per tool

Team savings reports

"Your team saved $4,200 this month"

Rate limit alerts

Monitoring & smart notifications

Enterprise controls

SSO, audit logs, compliance

Free for open-source. Teams from $15/dev/month.


No spam. One email when we launch.

Viking slamming axe into ground creating green energy shockwave

Get started in 30 seconds

Install, activate the auto-rewrite hook, and every command is compressed automatically.

Quick Install

One-liner for Linux & macOS

curl -fsSL https://raw.githubusercontent.com/rtk-ai/rtk/refs/heads/master/install.sh | sh

Via Homebrew

macOS & Linux

brew install rtk
brew upgrade rtk

Pre-built Binaries

macOS, Linux, Windows

Then activate the auto-rewrite hook

rtk init --global

This installs a PreToolUse hook that transparently rewrites Bash commands to rtk equivalents.
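As an illustration only (this is not rtk's actual implementation), the hook's job can be sketched as a prefix rewrite from plain commands to their rtk equivalents:

```python
# Illustrative sketch of what a command-rewrite hook does: map a raw
# Bash command to its rtk equivalent when one exists, otherwise pass
# it through unchanged. The table below is a hypothetical subset.
REWRITES = {
    "git status": "rtk git status",
    "git diff": "rtk git diff",
    "cargo test": "rtk cargo test",
    "grep": "rtk grep",
}

def rewrite(command: str) -> str:
    for prefix, replacement in REWRITES.items():
        if command == prefix or command.startswith(prefix + " "):
            return replacement + command[len(prefix):]
    return command  # no rtk equivalent: run as-is

print(rewrite("git status"))        # rtk git status
print(rewrite("cargo test --lib"))  # rtk cargo test --lib
```

The real hook does this transparently for every Bash call the agent makes, so the agent never has to know rtk exists.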

1 curl ... | sh
2 rtk init --global
3 rtk gain
Viking raising axe in triumphant war cry
Victorious viking standing over crushed tokens

Your AI doesn't need
to read all that.

Install rtk. Better code, longer sessions, lower costs.

Starred by developers at

Apple · AWS · Barclays · Bosch · Canva · Cisco · Datadog · Deloitte · ENI · Google · Hitachi · HPE · IBM · Meta · Microsoft · OVHcloud · Zendesk