Three steps. Thirty seconds.
Seriously, that's all it takes. No infra, no config files, no 40-page docs.
Sign up & get your key
Create an account and get your API key instantly. One key unlocks every model — Claude Opus, Sonnet, Haiku + GPT-5, GPT-5.4, Codex. No separate keys, no permissions to configure.
~10 seconds
Set one environment variable
Point your tool to our endpoint. That's literally one line in your shell config. Every Anthropic-compatible client picks it up automatically.
~10 seconds
Code like normal
Open Conductor, OpenCode, Claude Code CLI, Codex Desktop, Cursor, OpenClaw — whatever you use. It just works. No changes to your workflow, no new tools to learn.
~10 seconds
One line. That's it.
Add this to your shell profile (~/.zshrc, ~/.bashrc, etc.):
export ANTHROPIC_BASE_URL=https://openlimits.app
export ANTHROPIC_API_KEY=your-key-here

Then use Claude Code CLI, Conductor, Cursor, or any Anthropic-compatible client as normal. For full API documentation, see our API Docs.
Setup by tool
The shell env vars above work for most tools. Here are tool-specific instructions if you need them.
Claude Code CLI
Add the two env vars to your shell profile and restart your terminal. Every claude session will route through OpenLimits. Extended thinking, streaming, tool use — everything works exactly as before.
Conductor
Conductor automatically uses your Claude Code CLI settings if installed. Otherwise, go to Settings → Env and set ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY.
Cursor
In Cursor, add OpenLimits as an API provider. Set the base URL to https://openlimits.app and paste your API key. All Claude models will be available.
Python SDK
Pass base_url="https://openlimits.app" to anthropic.Anthropic(). Or set the env vars — the SDK picks them up automatically. Works with the OpenAI SDK too via /v1/chat/completions.
TypeScript SDK
Pass baseURL: "https://openlimits.app" to the Anthropic constructor. Same as Python — env vars work too. Full streaming and extended thinking support.
cURL / Direct API
Replace api.anthropic.com with openlimits.app in your requests. Send your key in an x-api-key or Authorization: Bearer header. That's it.
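The same direct request from Python's standard library, as a sketch (the /v1/messages path, version header, and model name follow the standard Anthropic Messages API and are assumptions here):

```python
import json
import urllib.request

# Identical to a request against api.anthropic.com, just a different host.
req = urllib.request.Request(
    "https://openlimits.app/v1/messages",
    data=json.dumps({
        "model": "claude-sonnet-4-5",  # illustrative model name
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={
        "x-api-key": "your-key-here",  # or: "Authorization": "Bearer your-key-here"
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["content"][0]["text"])
```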
What happens when you send a request
The proxy is transparent. Every request is routed through our own enterprise API accounts — purchased directly from Anthropic and OpenAI. Here's what happens in the ~50ms before your request reaches them.
Edge receives your request
Your request hits our nearest Cloudflare Workers edge node. We authenticate your API key and validate the request format.
Best provider selected
We pick the provider account with the lowest utilization. If you've been using the same account recently, we stick with it for consistency (sticky sessions).
Proxied & streamed back
Your request is forwarded to Anthropic or OpenAI (depending on the model) with the provider's credentials. The response streams back to you in real-time. If the provider errors, we automatically retry with another.
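The selection step above can be sketched like this — illustrative only; the account fields, utilization metric, and sticky threshold are assumptions, not the production logic:

```python
from dataclasses import dataclass

@dataclass
class ProviderAccount:
    name: str
    utilization: float  # 0.0 (idle) .. 1.0 (saturated)

def select_account(accounts, sticky=None):
    """Prefer the caller's recent account (sticky session) while it has
    headroom; otherwise fall back to the least-utilized account."""
    if sticky is not None and sticky.utilization < 0.9:  # threshold is an assumption
        return sticky
    return min(accounts, key=lambda a: a.utilization)

accounts = [
    ProviderAccount("ent-1", utilization=0.72),
    ProviderAccount("ent-2", utilization=0.31),
    ProviderAccount("ent-3", utilization=0.55),
]

print(select_account(accounts).name)                      # ent-2 (least utilized)
print(select_account(accounts, sticky=accounts[0]).name)  # ent-1 (sticky wins)
```

On provider error, the retry described above amounts to calling select_account again with the failed account excluded.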
Works with your existing setup
OpenLimits is a drop-in replacement for the Anthropic API. You don't need to change any code, install any packages, or modify your prompts. Your existing tools, scripts, and workflows all work the same — just with no rate limits.
Real Claude + Real Codex — same APIs, same models, same everything
Claude requests go to Anthropic. Codex/GPT requests go to OpenAI. We don't use different models, fine-tunes, or any third-party alternatives. The request and response formats are identical — your code doesn't know or care that it's going through OpenLimits. Streaming, extended thinking, tool use, PDFs, images — it all works.
No vendor lock-in
Remove the env var and you're back to direct Anthropic API. No proprietary SDK, no special format. You can switch back anytime.
Native Codex + Claude endpoint
/v1/chat/completions routes GPT/Codex models directly to OpenAI and translates Claude model aliases automatically. One endpoint, both providers.
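The routing idea can be sketched as follows — the prefix matching here is an assumption for illustration, not the service's actual dispatch code:

```python
def route_model(model: str) -> str:
    """Map a model name on /v1/chat/completions to an upstream provider."""
    if model.startswith(("gpt-", "codex")):
        return "openai"
    if model.startswith("claude"):
        return "anthropic"  # Claude aliases are translated, then proxied
    raise ValueError(f"unrecognized model: {model}")

print(route_model("gpt-5"))              # openai
print(route_model("claude-sonnet-4-5"))  # anthropic
```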