# free99.ai
free99.ai is a high-throughput, OpenAI-compatible proxy server designed to maximize the utility of free-tier AI APIs (like Gemini) while providing seamless fallback to paid providers. It features advanced IP rotation and tiered key management to ensure high availability.
## Key Features
- OpenAI Compatibility: Drop-in replacement for OpenAI-compatible clients (like Roo Code).
- Multi-IP Rotation: Rotates outgoing requests through multiple static IP addresses via Nginx egress proxies to minimize rate limiting.
- Tiered Key Rotation: Prioritizes "Free" tier keys to minimize costs, falling back to "Paid" tiers only when necessary.
- Provider Support:
  - Gemini (Google AI Studio)
  - Vertex AI (Google Cloud)
  - OpenRouter
- Automatic Cooldown: Automatically manages rate limits (429 errors) by putting keys into cooldown and retrying with the next available key.
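The tiered rotation and cooldown behavior above can be sketched as follows. This is a hypothetical Python illustration, not the server's actual code (the server is implemented in Luau); the class and method names are invented for clarity.

```python
import time

class KeyPool:
    """Hypothetical sketch: prefer free-tier keys, park rate-limited keys."""

    def __init__(self, keys, cooldown_period=60):
        # keys: list of (key, tier) pairs; "free" keys are tried before "paid".
        self.keys = keys
        self.cooldown_period = cooldown_period
        self.cooldown_until = {}  # key -> timestamp when it becomes usable again

    def acquire(self, now=None):
        now = time.time() if now is None else now
        # Try free-tier keys first, then paid, skipping keys still in cooldown.
        for tier in ("free", "paid"):
            for key, key_tier in self.keys:
                if key_tier == tier and self.cooldown_until.get(key, 0) <= now:
                    return key
        return None  # every key is cooling down

    def report_rate_limited(self, key, now=None):
        # Called on a 429: park the key for cooldown_period seconds.
        now = time.time() if now is None else now
        self.cooldown_until[key] = now + self.cooldown_period

pool = KeyPool([("k_free", "free"), ("k_paid", "paid")], cooldown_period=60)
print(pool.acquire(now=0))         # k_free (free tier preferred)
pool.report_rate_limited("k_free", now=0)
print(pool.acquire(now=0))         # k_paid (free key is cooling down)
print(pool.acquire(now=61))        # k_free again once the cooldown expires
```

The point of the two-level loop is that a paid key is only touched when every free key is unavailable, which is what keeps costs near zero under normal load.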
## Quick Start

### Local Testing (Single IP)
The proxy also runs on a local machine with a single IP address. In this mode, it makes direct requests to the AI providers without using egress proxies.
- Using Nix:

  ```sh
  nix develop
  bleumoon pkg install
  GEMINI_KEYS=your_key:free bleumoon run src/main.luau
  ```

- Using Docker:

  ```sh
  docker build -t free99 .
  docker run -p 8080:8080 -e GEMINI_KEYS=your_key:free free99
  ```
### Production Deployment (Multi-IP)
For production, use Docker Compose to manage the proxy and its environment.
- Update the `environment` section in `docker-compose.yaml` with your API keys and proxy endpoints.
- Run:

  ```sh
  docker-compose up -d
  ```
## Configuration

The proxy can be configured via environment variables or a `proxy.toml` file. Environment variables take precedence.

### Environment Variables
You can configure multiple providers simultaneously by setting their respective key variables.
| Variable | Description | Example |
|---|---|---|
| `PROXY_PORT` | Port to listen on | `8080` |
| `PROXY_HOST` | Host to bind to | `0.0.0.0` |
| `GEMINI_KEYS` | Comma-separated Gemini keys (`key:tier`) | `k1:free,k2:paid` |
| `VERTEX_KEYS` | Comma-separated Vertex keys | `proj:reg:tok:free` |
| `OPENROUTER_KEYS` | Comma-separated OpenRouter keys | `sk-or-v1...:paid` |
| `PROXY_PROXIES` | JSON array of proxy objects (leave empty for single-IP) | `[]` |
| `PROXY_KEY_STRATEGY` | Key rotation strategy | `round-robin` |
| `PROXY_COOLDOWN_PERIOD` | Cooldown in seconds | `60` |
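As a rough illustration of the comma-separated `key:tier` format, here is a hypothetical Python parser. The real parsing lives in the Luau source; splitting on the *last* colon is an assumption made here so that multi-colon Vertex keys (`proj:reg:tok:free`) keep their structure intact.

```python
def parse_keys(value):
    """Hypothetical sketch: parse 'k1:free,k2:paid' into (key, tier) pairs."""
    pairs = []
    for entry in value.split(","):
        entry = entry.strip()
        if not entry:
            continue
        # Split on the LAST colon so keys that themselves contain ":"
        # (e.g. Vertex's proj:reg:tok form) are preserved whole.
        key, _, tier = entry.rpartition(":")
        pairs.append((key, tier))
    return pairs

print(parse_keys("k1:free,k2:paid"))
# [('k1', 'free'), ('k2', 'paid')]
print(parse_keys("proj:reg:tok:free"))
# [('proj:reg:tok', 'free')]
```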
### Multi-Provider Example (Docker Compose)
```yaml
services:
  free99-proxy:
    image: git.ds.reinitialized.net/your-user/free99.ai:latest
    environment:
      - GEMINI_KEYS=free_key_1:free,paid_key_1:paid
      - OPENROUTER_KEYS=sk-or-v1-your-key:paid
      - PROXY_PROXIES=[{"url":"http://127.0.0.1:8081","label":"IP1"}]
```
### Example `proxy.toml`
```toml
[server]
port = 8080
host = "0.0.0.0"

[keyRotation]
key_strategy = "round-robin"
proxy_strategy = "round-robin"
cooldown_period = 60

# Define Nginx egress proxies for IP rotation
[[proxies]]
url = "http://127.0.0.1:8081"
label = "IP-1"

[[proxies]]
url = "http://127.0.0.1:8082"
label = "IP-2"

# Configure providers and keys
[[providers]]
name = "gemini"

[[providers.keys]]
key = "AIzaSyA..."
tier = "free"

[[providers.keys]]
key = "AIzaSyB..."
tier = "paid"

[[providers]]
name = "openrouter"

[[providers.keys]]
key = "sk-or-..."
tier = "paid"

# Map OpenAI model IDs to provider-specific models
[[models]]
id = "gpt-4o"
provider = "gemini"
remote_model = "gemini-1.5-pro"
```
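The `[[models]]` mapping can be read as a simple lookup table from the OpenAI model ID a client sends to the provider and remote model the proxy should actually call. A minimal Python sketch (hypothetical names, not the server's actual Luau code):

```python
# Mirrors the [[models]] entries from the example proxy.toml above.
MODELS = [
    {"id": "gpt-4o", "provider": "gemini", "remote_model": "gemini-1.5-pro"},
]

def resolve(model_id):
    """Return (provider, remote_model) for a requested OpenAI model ID."""
    for m in MODELS:
        if m["id"] == model_id:
            return m["provider"], m["remote_model"]
    raise KeyError(f"unknown model: {model_id}")

print(resolve("gpt-4o"))  # ('gemini', 'gemini-1.5-pro')
```

This indirection is what lets an OpenAI-compatible client keep requesting `gpt-4o` while the proxy transparently serves the request from Gemini.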
## Documentation
- Usage Guide: Detailed setup for Nginx, API keys, and Roo Code.
- Architecture: Deep dive into the tiered system and IP rotation strategy.
## License
This project is licensed under the MIT License.