API Reference
OpenLimits is a drop-in proxy for the Anthropic and OpenAI APIs. Point your client at our base URL, use your OpenLimits API key, and everything works — Claude models via /v1/messages, OpenAI Codex models via /v1/responses, and an OpenAI-compatible translation layer via /v1/chat/completions.
Quick Start
Set two environment variables and you're done:
```bash
export ANTHROPIC_BASE_URL=https://openlimits.app
export ANTHROPIC_API_KEY=your-key-here
```

That's it. Any Anthropic-compatible client (Claude Code CLI, Conductor, OpenCode, Cursor, etc.) will route through OpenLimits automatically.
Authentication
All API requests require authentication via one of:
- `x-api-key: YOUR_KEY` header (Anthropic style)
- `Authorization: Bearer YOUR_KEY` header (OpenAI style)
```bash
curl https://openlimits.app/v1/messages \
  -H "x-api-key: YOUR_KEY" \
  -H "content-type: application/json" \
  -d '{"model":"claude-sonnet-4-6","max_tokens":1024,"messages":[{"role":"user","content":"Hello"}]}'
```

POST /v1/messages
Proxies directly to the Anthropic Messages API. The request and response formats are identical — see Anthropic's docs for the full schema.
Request
```http
POST /v1/messages
Content-Type: application/json
x-api-key: YOUR_KEY

{
  "model": "claude-sonnet-4-6",
  "max_tokens": 1024,
  "messages": [
    { "role": "user", "content": "Explain recursion in one sentence." }
  ],
  "stream": false
}
```

Response
```json
{
  "id": "msg_...",
  "type": "message",
  "role": "assistant",
  "content": [
    { "type": "text", "text": "Recursion is when a function calls itself..." }
  ],
  "model": "claude-sonnet-4-6",
  "usage": {
    "input_tokens": 14,
    "output_tokens": 32,
    "cache_read_input_tokens": 0,
    "cache_creation_input_tokens": 0
  }
}
```

Streaming
Set `"stream": true` to receive server-sent events (SSE). The event format follows the Anthropic streaming spec.
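When consuming the stream manually, each SSE event carries a typed JSON payload. A minimal sketch of collecting the assistant text from those events (the helper and sample events are illustrative, following the Anthropic streaming event shapes):

```python
# Sketch: accumulate assistant text from Anthropic-style streaming events.
# The helper below is illustrative, not part of the OpenLimits API itself.

def accumulate_text(events):
    """Collect text from content_block_delta events into one string."""
    parts = []
    for event in events:
        if event.get("type") == "content_block_delta":
            delta = event.get("delta", {})
            if delta.get("type") == "text_delta":
                parts.append(delta.get("text", ""))
    return "".join(parts)

# Sample event sequence as it might arrive over SSE:
events = [
    {"type": "message_start"},
    {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "Hel"}},
    {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "lo"}},
    {"type": "message_stop"},
]
print(accumulate_text(events))  # → Hello
```

In practice the Anthropic SDKs handle this for you, e.g. via `client.messages.stream(...)` in Python.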
Effort Levels
Control quality versus speed with the `effort` parameter:
```json
{
  "model": "claude-sonnet-4-6",
  "max_tokens": 1024,
  "messages": [...],
  "output_config": { "effort": "low" }
}
```

Valid values: `low`, `medium`, `high`.
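Since `output_config` is an OpenLimits parameter rather than a standard Anthropic field, SDK callers typically need to pass it through an escape hatch such as the Python SDK's `extra_body` keyword. A small sketch that validates the value before sending (the `with_effort` helper is hypothetical):

```python
# Sketch: attach an OpenLimits effort level to a Messages request payload.
# "output_config" is the OpenLimits parameter documented above; the helper
# is illustrative and not part of any SDK.

VALID_EFFORT = {"low", "medium", "high"}

def with_effort(payload, effort):
    """Return a copy of payload with an output_config.effort setting."""
    if effort not in VALID_EFFORT:
        raise ValueError(f"effort must be one of {sorted(VALID_EFFORT)}")
    return {**payload, "output_config": {"effort": effort}}

request = with_effort(
    {"model": "claude-sonnet-4-6", "max_tokens": 1024, "messages": []},
    "low",
)
print(request["output_config"])  # → {'effort': 'low'}
```

With the Anthropic Python SDK, a non-standard field like this can typically be supplied as `client.messages.create(..., extra_body={"output_config": {"effort": "low"}})`.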
Extended Thinking
Supported via the `anthropic-beta: interleaved-thinking-2025-05-14` header. See Anthropic's extended thinking docs.
POST /v1/responses
Proxies to the OpenAI Responses API for Codex models. This is the endpoint used by Codex CLI and Codex Desktop when configured with OpenLimits.
Request
```http
POST /v1/responses
Content-Type: application/json
Authorization: Bearer YOUR_KEY

{
  "model": "gpt-5-codex",
  "instructions": "You are a helpful assistant.",
  "input": "Explain recursion in one sentence.",
  "stream": true,
  "store": false
}
```

Response
Returns a server-sent event (SSE) stream. Streaming is always enabled for this endpoint. The event format follows the OpenAI Responses API spec.
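If you consume the stream without an SDK, each event arrives on a `data:` line containing JSON. A minimal sketch of decoding those lines (the parser and sample lines are illustrative; the event name follows the OpenAI Responses API spec, and the `[DONE]` handling is defensive, borrowed from the chat-completions SSE convention):

```python
# Sketch: minimal decoder for "data: ..." lines of an SSE stream.
# A real client would use the OpenAI SDK or an SSE library instead;
# this only illustrates the wire format.
import json

def parse_sse_data(lines):
    """Yield decoded JSON payloads from 'data:' lines, skipping [DONE]."""
    for line in lines:
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":
                yield json.loads(payload)

stream = [
    'data: {"type": "response.output_text.delta", "delta": "Hi"}',
    "",
    "data: [DONE]",
]
events = list(parse_sse_data(stream))
print(events[0]["delta"])  # → Hi
```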
Supported Models
Only the GPT-5 Codex model family is supported via this endpoint:
- `gpt-5-codex`, `gpt-5.1-codex`, `gpt-5.2-codex`, `gpt-5.3-codex`
- `gpt-5-codex-mini`, `gpt-5.1-codex-mini`
Notes
- `stream` is always set to `true` (required by the upstream provider)
- `store` is always set to `false` (required by the upstream provider)
- If `instructions` is not provided, a default is used
POST /v1/chat/completions
OpenAI-compatible endpoint. Send requests in the OpenAI format and we translate to Anthropic on the fly. Use this with any OpenAI SDK client.
Request
```http
POST /v1/chat/completions
Content-Type: application/json
Authorization: Bearer YOUR_KEY

{
  "model": "claude-sonnet-4-6",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello" }
  ],
  "max_tokens": 1024,
  "stream": false
}
```

Response
```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "claude-sonnet-4-6",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help?" },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 8,
    "total_tokens": 28
  }
}
```

Model Routing
Claude model names are translated to the Anthropic Messages API format automatically. For OpenAI Codex models, use /v1/responses instead.
| Model | Routes To |
|---|---|
| `claude-opus-4-6`, `claude-sonnet-4-6`, etc. | Anthropic (translated to Claude) |
GET /v1/models
Returns a list of all available models in OpenAI-compatible format.
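The listing can also be fetched and post-processed with any OpenAI-compatible SDK; a sketch (the grouping helper is illustrative, and the live SDK call is commented out since it needs a valid key):

```python
# Sketch: group the GET /v1/models listing by provider.
from collections import defaultdict

def by_owner(models):
    """Group model dicts (as returned by GET /v1/models) by owned_by."""
    grouped = defaultdict(list)
    for m in models:
        grouped[m["owned_by"]].append(m["id"])
    return dict(grouped)

# With a live key:
# from openai import OpenAI
# client = OpenAI(api_key="your-key-here", base_url="https://openlimits.app/v1")
# models = [m.to_dict() for m in client.models.list()]

sample = [
    {"id": "claude-opus-4-6", "object": "model", "owned_by": "anthropic"},
    {"id": "claude-sonnet-4-6", "object": "model", "owned_by": "anthropic"},
]
print(by_owner(sample))  # → {'anthropic': ['claude-opus-4-6', 'claude-sonnet-4-6']}
```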
```http
GET /v1/models
x-api-key: YOUR_KEY
```

```json
{
  "object": "list",
  "data": [
    { "id": "claude-opus-4-6", "object": "model", "owned_by": "anthropic" },
    { "id": "claude-sonnet-4-6", "object": "model", "owned_by": "anthropic" },
    { "id": "claude-haiku-4-5-20251001", "object": "model", "owned_by": "anthropic" },
    { "id": "claude-sonnet-4-5-20250929", "object": "model", "owned_by": "anthropic" },
    { "id": "claude-opus-4-5-20251101", "object": "model", "owned_by": "anthropic" },
    ...
  ]
}
```

Supported Models
Claude (Anthropic)
Available via /v1/messages and /v1/chat/completions:
| Model | Family |
|---|---|
| `claude-opus-4-6` | Opus 4.6 (latest) |
| `claude-sonnet-4-6` | Sonnet 4.6 (latest) |
| `claude-haiku-4-5-20251001` | Haiku 4.5 |
| `claude-sonnet-4-5-20250929` | Sonnet 4.5 (dated) |
| `claude-opus-4-5-20251101` | Opus 4.5 (dated) |
| `claude-sonnet-4-20250514` | Sonnet 4.0 (dated) |
| `claude-opus-4-20250514` | Opus 4.0 (dated) |
| `claude-sonnet-4-5` | Sonnet 4.5 |
| `claude-opus-4-5` | Opus 4.5 |
| `claude-opus-4-1` | Opus 4.1 |
| `claude-sonnet-4-0` | Sonnet 4.0 |
| `claude-opus-4-0` | Opus 4.0 |
| `claude-haiku-4-5` | Haiku 4.5 (alias) |
Codex / GPT (OpenAI)
Available via /v1/responses — used by Codex CLI and Codex Desktop:
| Model | Family |
|---|---|
| `gpt-5-codex` | GPT-5 Codex |
| `gpt-5.1-codex` | GPT-5.1 Codex |
| `gpt-5.2-codex` | GPT-5.2 Codex |
| `gpt-5.3-codex` | GPT-5.3 Codex |
| `gpt-5-codex-mini` | GPT-5 Codex Mini |
| `gpt-5.1-codex-mini` | GPT-5.1 Codex Mini |
Errors
Errors follow the Anthropic error format:
```json
{
  "type": "error",
  "error": {
    "type": "authentication_error",
    "message": "Invalid or disabled authentication token"
  }
}
```

| Status | Type | Meaning |
|---|---|---|
| 400 | `invalid_request_error` | Invalid model or malformed request |
| 401 | `authentication_error` | Missing or invalid API key |
| 403 | `permission_error` | Spend limit exceeded or token expired |
| 429 | `rate_limit_error` | All providers temporarily at capacity |
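Of these statuses, only 429 signals a transient condition; the others require fixing the request, key, or account. A sketch of a retry policy derived from the table (the policy is illustrative, not something the API mandates):

```python
# Sketch: classify OpenLimits error statuses for retry handling.
RETRYABLE = {429}          # all providers temporarily at capacity
FATAL = {400, 401, 403}    # fix the request, key, or account instead

def should_retry(status: int) -> bool:
    """Return True only for transient, retry-worthy statuses."""
    return status in RETRYABLE

print(should_retry(429), should_retry(401))  # → True False
```

SDK users can catch these through their SDK's status-error types (e.g. `anthropic.APIStatusError` in the Anthropic Python SDK) and inspect the status code.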
Client Setup Examples
Claude Code CLI
Run the automated setup script to configure Claude Code and Codex CLI in one step, or add manually to `~/.claude/settings.json`:
```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://openlimits.app",
    "ANTHROPIC_AUTH_TOKEN": "your-key-here"
  }
}
```

Conductor
Conductor automatically uses your Claude Code CLI settings (if installed). Otherwise, to force Conductor to use our API, set these environment variables:
Settings → Env
```bash
ANTHROPIC_BASE_URL=https://openlimits.app
ANTHROPIC_API_KEY=your-key-here
```

Python (Anthropic SDK)
```python
import anthropic

client = anthropic.Anthropic(
    api_key="your-key-here",
    base_url="https://openlimits.app",
)

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
print(message.content[0].text)
```

Python (OpenAI SDK)
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-key-here",
    base_url="https://openlimits.app/v1",
)

response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

TypeScript (Anthropic SDK)
```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  apiKey: "your-key-here",
  baseURL: "https://openlimits.app",
});

const message = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
});
console.log(message.content[0].text);
```

cURL
```bash
curl https://openlimits.app/v1/messages \
  -H "x-api-key: your-key-here" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

Codex CLI / Desktop
OpenLimits works as a drop-in backend for Codex CLI and Codex Desktop. The setup script configures everything automatically, or you can set it up manually below.
Codex CLI (Recommended: config.toml)
Create or edit `~/.codex/config.toml` with a custom provider:
```toml
# ~/.codex/config.toml
model = "gpt-5-codex"
model_provider = "openlimits"

[model_providers.openlimits]
name = "OpenLimits"
base_url = "https://openlimits.app/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"
```

Then set the API key in your shell profile (`~/.zshrc` or `~/.bashrc`):

```bash
export OPENAI_API_KEY=your-key-here
```

Important: Remove `OPENAI_BASE_URL` and `CODEX_OPENAI_BASE_URL` from your environment if set, because they override the `config.toml` settings.
Codex Desktop
Open Codex Desktop settings and set:
| Setting | Value |
|---|---|
| Base URL | https://openlimits.app/v1 |
| API Key | your-key-here |
Model Selection
With OpenLimits, you can use GPT-5 Codex models in Codex CLI:
```bash
# GPT-5 Codex models
codex --model gpt-5-codex
codex --model gpt-5.2-codex
codex --model gpt-5.3-codex
codex --model gpt-5-codex-mini
```

Environment Variables
A reference of all environment variables you can use to configure clients with OpenLimits.
Anthropic-compatible clients
For Claude Code CLI, Conductor, and any Anthropic SDK client:
| Variable | Value | Used by |
|---|---|---|
| `ANTHROPIC_BASE_URL` | `https://openlimits.app` | Claude Code, Conductor, Anthropic SDKs |
| `ANTHROPIC_API_KEY` | Your OpenLimits key | Claude Code, Conductor, Anthropic SDKs |
| `ANTHROPIC_AUTH_TOKEN` | Your OpenLimits key | Claude Code (alternative to `ANTHROPIC_API_KEY`) |
OpenAI-compatible clients
For Codex CLI/Desktop, Cursor, and any OpenAI SDK client:
| Variable | Value | Used by |
|---|---|---|
| `OPENAI_BASE_URL` | `https://openlimits.app/v1` | Codex CLI/Desktop, OpenAI SDKs |
| `OPENAI_API_KEY` | Your OpenLimits key | Codex CLI/Desktop, OpenAI SDKs |
Note: Anthropic clients use `https://openlimits.app` (no `/v1`), while OpenAI clients use `https://openlimits.app/v1` (with `/v1`). This is because the Anthropic SDK appends `/v1/messages` automatically, while the OpenAI SDK appends only the endpoint path.
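A toy illustration of this path composition (the helper functions are hypothetical, not part of any SDK):

```python
# Toy sketch: how each SDK family builds a full endpoint URL from its base.
def anthropic_url(base):
    """Anthropic SDKs append /v1/messages to the base URL."""
    return base.rstrip("/") + "/v1/messages"

def openai_url(base):
    """OpenAI SDKs append only the endpoint path; base already has /v1."""
    return base.rstrip("/") + "/chat/completions"

print(anthropic_url("https://openlimits.app"))
print(openai_url("https://openlimits.app/v1"))
# Both resolve to paths under https://openlimits.app/v1/
```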
Quick copy
```bash
# For Claude Code / Anthropic clients
export ANTHROPIC_BASE_URL=https://openlimits.app
export ANTHROPIC_API_KEY=your-key-here

# For Codex / OpenAI clients
export OPENAI_BASE_URL=https://openlimits.app/v1
export OPENAI_API_KEY=your-key-here
```