translate is a command-line tool for translating text and files with configurable providers, prompt presets, and TOML-based configuration.
LLM-backed providers (openai, anthropic, gemini, open-responses, ollama, openai-compatible, and apple-intelligence) use AnyLanguageModel as the model/provider abstraction layer.
Install via Homebrew tap:

```shell
brew tap atacan/tap
brew install atacan/tap/translate
```

Verify:

```shell
translate --version
translate --help
```

Build from source (alternative):

```shell
swift build -c release
sudo install -m 755 "$(swift build -c release --show-bin-path)/translate" /usr/local/bin/translate
```

Release and Homebrew automation docs: docs/release.md
These examples assume you already configured a provider (see Provider Setup below).
Important: put options before positional text/file arguments (for example, `translate --to de README.md`, not `translate README.md --to de`).
Translate inline text:
```shell
translate --text --from tr --to en "Merhaba dunya"
```

Auto-detect source language:

```shell
translate --text --to fr "Hello world"
```

Preview resolved prompts and settings without sending a request:

```shell
translate --provider ollama --text --to en --dry-run "Merhaba dunya"
```

Defaults:

- Default provider: `openai`
- Default model (openai): `gpt-4o-mini`
- Default source language: `auto`
- Default target language: `en`
OpenAI:

```shell
export OPENAI_API_KEY="your_api_key"
translate --text --to en "Merhaba dunya"
```

Anthropic:

```shell
export ANTHROPIC_API_KEY="your_api_key"
translate --provider anthropic --text --to en "Merhaba dunya"
```

Gemini:

```shell
export GEMINI_API_KEY="your_api_key"
translate --provider gemini --text --to en "Merhaba dunya"
```

Open Responses:

```shell
export OPEN_RESPONSES_API_KEY="your_api_key"
translate --provider open-responses --text --to en "Merhaba dunya"
```

Ollama:

```shell
translate --provider ollama --model llama3.2 --text --to en "Merhaba dunya"
```

OpenAI-compatible endpoints, using ad-hoc flags:

```shell
translate --base-url http://localhost:1234/v1 --api-key dummy --model llama3.1 --text --to en "Merhaba dunya"
```

Or configure named endpoints (recommended):

```shell
translate config set providers.openai-compatible.lmstudio.base_url http://localhost:1234/v1
translate config set providers.openai-compatible.lmstudio.model llama3.1
translate config set providers.openai-compatible.lmstudio.api_key dummy
translate --provider lmstudio --text --to en "Merhaba dunya"
```

DeepL:

```shell
export DEEPL_API_KEY="your_api_key"
translate --provider deepl --text --to en "Merhaba dunya"
```

Notes:

- `--base-url` without `--provider` automatically uses `openai-compatible`.
- `openai-compatible` now requires an API key (some local endpoints may accept any placeholder string).
- `--provider openai` and `--base-url` cannot be used together.
- `apple-translate` and `apple-intelligence` are available on macOS 26+.
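The `config set` commands for a named endpoint persist a nested table in `config.toml`. A sketch of the resulting fragment, using the lmstudio values from the example above:

```toml
[providers.openai-compatible.lmstudio]
base_url = "http://localhost:1234/v1"
model = "llama3.1"
api_key = "dummy"
```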
translate accepts input from positional arguments, files, globs, or stdin.
Without `--text`, a single positional argument is treated as a file if that path exists; otherwise it is treated as text.
```shell
translate --to es "How are you?"
```

Use `--text` to force literal text mode:

```shell
translate --text --to es "README.md"
```

Single file to stdout:

```shell
translate --to de docs/input.md
```

Single file to stdout with streaming enabled:

```shell
translate --stream --to de docs/input.md
```

Streaming control note:

- `--stream` forces streaming on for the current command.
- `--no-stream` forces streaming off for the current command.
- Both are needed because `defaults.stream` in `config.toml` can enable streaming globally, and each command still needs a direct way to override that default in either direction.

Single file to explicit output path:

```shell
translate --to de --output docs/input.de.md docs/input.md
```

In-place overwrite:

```shell
translate --to de --in-place docs/input.md
```

Use shell-expanded file lists:

```shell
translate --to fr --suffix _fr docs/*.md
```

Or quote patterns so translate expands the glob:

```shell
translate --to fr "docs/**/*.md"
```

Behavior for multiple files or globs:

- Output is written per-file.
- Default suffix is `_<LANG>` (for example `_FR`).
- `--output` is only valid for a single input file.
- Use `--jobs` to process multiple files concurrently.
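The quoted-versus-unquoted distinction can be seen with plain shell commands, independent of translate (the `demo/` paths here are throwaway examples):

```shell
# Set up throwaway files to illustrate who expands the glob.
mkdir -p demo/docs/sub
touch demo/docs/a.md demo/docs/sub/b.md

# Unquoted: the shell expands the pattern before the command runs,
# so the command receives each matching path as a separate argument.
( cd demo && printf '%s\n' docs/*.md )        # prints: docs/a.md

# Quoted: the literal pattern reaches the command unchanged; a tool
# like translate can then expand it itself (including ** recursion).
( cd demo && printf '%s\n' "docs/**/*.md" )   # prints: docs/**/*.md
```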
Read from stdin:

```shell
echo "Merhaba dunya" | translate --to en
```

Built-in presets: `general`, `markdown`, `xcode-strings`, `legal`, `ui`.
List presets:
```shell
translate presets list
```

Show preset prompts:

```shell
translate presets show markdown
```

Use a preset:

```shell
translate --preset markdown --to fr README.md
```

Override prompt templates directly:

```shell
translate --text --to en \
  --system-prompt "You are a strict translator from {from} to {to}." \
  --user-prompt "Translate this {format}: {text}" \
  "Merhaba dunya"
```

Load prompt templates from files with @path:

```shell
translate --text --to en \
  --system-prompt @./prompts/system.txt \
  --user-prompt @./prompts/user.txt \
  "Merhaba dunya"
```

Available placeholders:

- `{from}`
- `{to}`
- `{text}`
- `{context}`
- `{context_block}` (unknown)
- `{format}`
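A minimal sketch of preparing the `@path` template files referenced above (the file names match the example; the placeholders are substituted by translate at run time):

```shell
mkdir -p prompts

# System prompt template; {from} and {to} are filled in by translate.
cat > prompts/system.txt <<'EOF'
You are a strict translator from {from} to {to}.
EOF

# User prompt template; {format} and {text} are filled in per request.
cat > prompts/user.txt <<'EOF'
Translate this {format}: {text}
EOF

grep -c '{to}' prompts/system.txt   # prints: 1
```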
Default config path: `~/.config/translate/config.toml`

Override the config path:

- CLI: `--config /path/to/config.toml`
- Environment: `TRANSLATE_CONFIG=/path/to/config.toml`
Inspect config:

```shell
translate config path
translate config show
translate config get defaults.provider
```

Set and unset values:

```shell
translate config set defaults.provider anthropic
translate config set defaults.to fr
translate config set defaults.stream true
translate config set defaults.jobs 4
translate config unset defaults.jobs
```

Why both `--stream` and `--no-stream` exist:

- Config can set a global default with `defaults.stream = true` or `false`.
- `--stream` is the per-command override that forces streaming on.
- `--no-stream` is the per-command override that forces streaming off.
- Without both flags, users with a global preference would lose the ability to invert it for one command without editing the config.
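For instance, with streaming enabled globally in `config.toml`:

```toml
[defaults]
stream = true
```

a single run can still disable it with `translate --no-stream --to de docs/input.md`.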
Edit in $EDITOR:

```shell
translate config edit
```

Example config.toml:

```toml
[defaults]
provider = "openai"
from = "auto"
to = "en"
preset = "general"
format = "auto"
stream = false
yes = false
jobs = 1

[network]
timeout_seconds = 120
retries = 3
retry_base_delay_seconds = 1

[providers.openai]
model = "gpt-4o-mini"

[providers.openai-compatible.lmstudio]
base_url = "http://localhost:1234/v1"
model = "llama3.1"
api_key = ""

[presets.markdown]
user_prompt = "Translate this markdown from {from} to {to}: {text}"
```

The `[network]` settings above apply to providers using the custom HTTP path (currently DeepL). AnyLanguageModel-backed LLM providers use the library's own networking behavior.
Main translation options:

- `--text`: force literal positional text mode
- `--output, -o <path>`: write output to a file
- `--in-place, -i`: overwrite source files
- `--suffix <suffix>`: suffix for per-file outputs
- `--yes, -y`: skip overwrite confirmations
- `--jobs, -j <n>`: parallel file jobs
- `--from, -f <lang>`: source language (`auto` allowed)
- `--to, -t <lang>`: target language (`auto` not allowed)
- `--provider, -p <name>`: provider
- `--model, -m <id>`: model identifier
- `--base-url <url>`: openai-compatible base URL
- `--api-key <key>`: API key override
- `--preset <name>`: prompt preset
- `--system-prompt <text|@file>`: system prompt override
- `--user-prompt <text|@file>`: user prompt override
- `--context, -c <text>`: extra context
- `--format <auto|text|markdown|html>`: format hint
- `--dry-run`: print resolved prompts/provider/model and exit
- `--quiet, -q`: suppress warnings
- `--verbose, -v`: verbose diagnostics
Subcommands:

- `translate config ...`
- `translate presets ...`
Exit codes:

- `0`: success
- `1`: runtime error
- `2`: invalid arguments
- `3`: aborted by user
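A hedged sketch of acting on these exit codes from a script; `run_translate` is a stub standing in for a real `translate` invocation so the dispatch logic can run on its own:

```shell
# Stub standing in for `translate`; replace with the real command.
run_translate() { return "${FAKE_STATUS:-0}"; }

handle() {
  run_translate "$@"
  case $? in
    0) echo "success" ;;
    1) echo "runtime error" >&2 ;;
    2) echo "invalid arguments" >&2 ;;
    3) echo "aborted by user" >&2 ;;
  esac
}

FAKE_STATUS=2 handle --to fr docs/input.md   # prints (to stderr): invalid arguments
```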
Common errors and fixes:

`OPENAI_API_KEY is required for provider 'openai'`

- Set `OPENAI_API_KEY`, switch provider, or change `defaults.provider`.

`'auto' is not valid for --to`

- Use a concrete target language such as `--to fr`.

`--output can only be used with a single input`

- Use one input file with `--output`, or use `--suffix` for multi-file workflows.

`No files matched the pattern ...`

- Quote glob patterns when you want `translate` to expand them itself, and verify paths.
An example script for running the compiled binary directly is available at `examples/run-translation-from-build.sh`.