Proxify is a high-performance reverse proxy gateway written in Go. It allows developers to access various large model APIs through a unified entry point — solving problems such as regional restrictions and multi-service configuration complexity. Proxify is deeply optimized for LLM streaming responses, ensuring the best performance and smooth user experience.
- 💎 Powerful Extensibility: More than just an AI gateway, Proxify is a universal reverse proxy server with special optimizations for LLM APIs, including stream smoothing, heartbeat keepalive, and tail acceleration.
- 🚀 Unified API Entry: Route to multiple upstreams through a single-level path, e.g. `/openai` → `api.openai.com`, `/gemini` → `generativelanguage.googleapis.com`. All routes are defined in one configuration file for simplicity and efficiency.
- ⚡ Lightweight & High Performance: Built with Go, it natively supports high concurrency with minimal memory usage and runs smoothly on servers with as little as 0.5 GB RAM.
- 🚄 Stream Optimization:
  - Smooth Output: A built-in flow controller paces model responses to produce a smooth "typing effect".
  - Heartbeat Keepalive: Automatically inserts heartbeat messages into SSE (Server-Sent Events) streams to prevent idle timeouts.
  - Tail Acceleration: Keeps latency under control by accelerating delivery of the final part of the response.
- 🛡️ Security & Privacy: Fully self-hosted; all requests and data remain under your control, and no third-party servers are involved.
- 🌐 Broad Compatibility: Preconfigured routes for major AI providers such as OpenAI, Azure, Claude, Gemini, and DeepSeek. Easily extendable to any HTTP API via configuration.
- 🔧 Easy Integration: Switch your existing API client to Proxify simply by updating the `BaseURL`; no code changes or request parameter modifications are required.
- 👨‍💻 Open Source & Professional: Designed and maintained by an experienced AI engineering team. 100% open source, auditable, and community-driven (PRs and issues are welcome).
- Backend Gateway: Golang + Gin
- Frontend Dashboard: React + Vite + TypeScript + Tailwind CSS
Integrating your existing services with Proxify takes only three steps.

1. Browse the Supported API list and find the proxy path prefix (`path`) for the desired service.
2. Replace the original API base URL in your code with your Proxify deployment address, appending the route prefix.
   - Original: `https://api.openai.com/v1/chat/completions`
   - Replaced with: `http://<your-proxify-domain>/openai/v1/chat/completions`
3. Done! Use your existing API key and parameters as usual. Your headers and request body remain unchanged.
```typescript
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: "sk-...", // your OpenAI API key
  baseURL: "http://127.0.0.1:7777/openai/v1", // your Proxify address
});

async function main() {
  const stream = await openai.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "hi" }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
}

main();
```

Proxify offers multiple deployment options. Before starting, make sure you've completed the setup steps below.
Proxify includes `.env.example` and `routes.json.example`. Copy and adjust them to your needs.

```shell
cp .env.example .env
```

Example:
```shell
# Mode: debug | release
MODE=debug

# Server port
PORT=7777

# Optional GitHub token
GITHUB_TOKEN=ghp_xxxx

# Stream optimization
STREAM_SMOOTHING_ENABLED=true
STREAM_HEARTBEAT_ENABLED=true

# IP whitelist (optional)
# Supports single IPs, CIDR notation, and multiple entries separated by commas
AUTH_IP_WHITELIST="127.0.0.1,10.0.0.0/8,192.168.1.0/24,::1"

# Token-based authentication (optional)
AUTH_TOKEN_HEADER="X-API-Token"
AUTH_TOKEN_KEY="your-super-secret-token"
```

💡 Tips:

- For Docker, mount `.env` into `/app/.env` inside the container.
- For a local binary, keep `.env` in the same directory as the executable.
- All configuration items marked as "optional" (such as `GITHUB_TOKEN`, `AUTH_IP_WHITELIST`, `AUTH_TOKEN_*`) are disabled when left empty or unset.
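As an illustration of how whitelist entries like these are typically matched (a sketch, not Proxify's actual code), Go's `net/netip` package can handle both single IPs and CIDR ranges:

```go
package main

import (
	"fmt"
	"net/netip"
	"strings"
)

// allowed reports whether ip matches any entry in a comma-separated
// whitelist of single IPs and CIDR ranges, in the same format as
// AUTH_IP_WHITELIST above. Illustrative only.
func allowed(whitelist, ip string) bool {
	addr, err := netip.ParseAddr(ip)
	if err != nil {
		return false // unparseable client IP: deny
	}
	for _, entry := range strings.Split(whitelist, ",") {
		entry = strings.TrimSpace(entry)
		// CIDR entries like "10.0.0.0/8".
		if prefix, err := netip.ParsePrefix(entry); err == nil {
			if prefix.Contains(addr) {
				return true
			}
			continue
		}
		// Single-IP entries like "127.0.0.1" or "::1".
		if single, err := netip.ParseAddr(entry); err == nil && single == addr {
			return true
		}
	}
	return false
}

func main() {
	wl := "127.0.0.1,10.0.0.0/8,192.168.1.0/24,::1"
	fmt.Println(allowed(wl, "10.42.0.7"))   // inside 10.0.0.0/8
	fmt.Println(allowed(wl, "192.168.2.1")) // outside 192.168.1.0/24
}
```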
```shell
cp routes.json.example routes.json
```

Example:
```json
{
  "routes": [
    {
      "name": "OpenAI",
      "description": "OpenAI Official API Endpoint",
      "path": "/openai",
      "target": "https://api.openai.com/",
      "model_map": {
        "gpt-4o": "gpt-4o-2024-11-20"
      }
    },
    {
      "name": "DeepSeek",
      "description": "DeepSeek Official API Endpoint",
      "path": "/deepseek",
      "target": "https://api.deepseek.com"
    },
    {
      "name": "Claude",
      "description": "Anthropic Claude Official API Endpoint",
      "path": "/claude",
      "target": "https://api.anthropic.com"
    },
    {
      "name": "Gemini",
      "description": "Google Gemini Official API Endpoint",
      "path": "/gemini",
      "target": "https://generativelanguage.googleapis.com"
    }
  ]
}
```
- Routes can be modified freely; changes are hot-reloaded automatically without restarting the service.
- Route-level rewriting of the `model` field in the request body is supported (via `model_map`), commonly used for model aliases, automatic fallback, or cross-platform compatibility.
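As a sketch of how such a rewrite works (illustrative, not Proxify's actual code), the proxy can decode the JSON body, swap the `model` value through the route's `model_map`, and re-encode it before forwarding upstream:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rewriteModel applies a route-level model map to a JSON request body:
// if the body's "model" field has an alias in modelMap, it is replaced
// before the request is forwarded upstream. Illustrative only.
func rewriteModel(body []byte, modelMap map[string]string) ([]byte, error) {
	var payload map[string]any
	if err := json.Unmarshal(body, &payload); err != nil {
		return nil, err
	}
	if name, ok := payload["model"].(string); ok {
		if target, mapped := modelMap[name]; mapped {
			payload["model"] = target // rewrite the alias to its target
		}
	}
	return json.Marshal(payload)
}

func main() {
	modelMap := map[string]string{"gpt-4o": "gpt-4o-2024-11-20"}
	out, _ := rewriteModel([]byte(`{"model":"gpt-4o","stream":true}`), modelMap)
	fmt.Println(string(out))
}
```

A real gateway would also restore `Content-Length` after the rewrite, since the body size may change.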
We provide three convenient Docker deployment methods.
This is the fastest and most recommended way to deploy in production.
```shell
# 1. Pull the latest image from Docker Hub
docker pull poixeai/proxify:latest

# 2. Run the container and mount configuration files
docker run -d \
  --name proxify \
  -p 7777:7777 \
  -v $(pwd)/routes.json:/app/routes.json \
  -v $(pwd)/.env:/app/.env \
  --restart=always \
  poixeai/proxify:latest
```

Manage your service declaratively via a `docker-compose.yml` file for better maintainability.
1. Ensure the `docker-compose.yml` file exists in your current directory.
2. Start the service:

   ```shell
   # Start the service
   docker-compose up -d

   # Check service status
   docker-compose ps
   ```
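If you do not have a `docker-compose.yml` yet, a minimal file that mirrors the `docker run` flags above could look like this (the service name is illustrative):

```yaml
services:
  proxify:
    image: poixeai/proxify:latest
    container_name: proxify
    ports:
      - "7777:7777"
    volumes:
      - ./routes.json:/app/routes.json
      - ./.env:/app/.env
    restart: always
```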
Use this method if you want to build your own image from the latest source code.
```shell
# 1. Clone the repository
git clone https://github.com/poixeai/proxify.git
cd proxify

# 2. Build your own image (for example, name it my-proxify)
docker build -t my-proxify .

# 3. Run the image you just built
docker run -d \
  --name proxify \
  -p 7777:7777 \
  -v $(pwd)/routes.json:/app/routes.json \
  -v $(pwd)/.env:/app/.env \
  --restart=always \
  my-proxify
```

For development environments, or when Docker is not preferred.
Requirements:
- Go (version 1.20+)
- Node.js (version 18+)
- pnpm
We provide a build.sh script to simplify the compilation process.
```shell
# 1. Clone the repository and enter the directory
git clone https://github.com/poixeai/proxify.git
cd proxify

# 2. Grant execution permission to the script
chmod +x build.sh

# 3. Run the build script
./build.sh

# 4. Run the compiled binary
./bin/proxify
```

If you prefer to understand the full build process, build manually:
```shell
# 1. Clone the repository and enter the directory
git clone https://github.com/poixeai/proxify.git
cd proxify

# 2. Build the frontend static assets
cd web
pnpm install
pnpm build
cd ..

# 3. Build the backend Go application
go mod tidy
go build -o ./bin/proxify .

# 4. Run the binary
./bin/proxify
```

Proxify can proxy any HTTP service. Below are the preconfigured and optimized AI API routes:
| Provider | Path | Target URL |
|---|---|---|
| OpenAI | `/openai` | `https://api.openai.com` |
| Azure | `/azure` | `https://<your-res>.openai.azure.com` |
| DeepSeek | `/deepseek` | `https://api.deepseek.com` |
| Claude | `/claude` | `https://api.anthropic.com` |
| Gemini | `/gemini` | `https://generativelanguage.googleapis.com` |
| Grok | `/grok` | `https://api.x.ai` |
| Aliyun | `/aliyun` | `https://dashscope.aliyuncs.com` |
| VolcEngine | `/volcengine` | `https://ark.cn-beijing.volces.com` |
All of the routes above are defined in the `routes.json` configuration. The currently active route list can be queried over HTTP:

```
GET https://proxify.poixe.com/api/routes
```

We welcome all contributions, whether it's filing an issue, submitting a PR, or improving documentation. Your support helps the community grow.
This project is licensed under the MIT License.
