# Auto Router
Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output. To see which model was used, visit [Activity](/activity), or read the `model` attribute of the response. Your response will be priced at the same rate as the routed model. Learn more, including how to customize the models for routing, in our [docs](/docs/guides/routing/routers/auto-router).

Requests will be routed to the following models:

- [openai/gpt-5.2](/openai/gpt-5.2)
- [openai/gpt-5.2-pro](/openai/gpt-5.2-pro)
- [openai/gpt-5.1](/openai/gpt-5.1)
- [openai/gpt-5](/openai/gpt-5)
- [openai/gpt-5-mini](/openai/gpt-5-mini)
- [openai/gpt-5-nano](/openai/gpt-5-nano)
- [openai/gpt-4.1](/openai/gpt-4.1)
- [openai/gpt-4.1-mini](/openai/gpt-4.1-mini)
- [openai/gpt-4.1-nano](/openai/gpt-4.1-nano)
- [openai/gpt-oss-120b](/openai/gpt-oss-120b)
- [anthropic/claude-opus-4.5](/anthropic/claude-opus-4.5)
- [anthropic/claude-sonnet-4.5](/anthropic/claude-sonnet-4.5)
- [anthropic/claude-haiku-4.5](/anthropic/claude-haiku-4.5)
- [google/gemini-3-pro-preview](/google/gemini-3-pro-preview)
- [google/gemini-2.5-pro](/google/gemini-2.5-pro)
- [google/gemini-2.5-flash](/google/gemini-2.5-flash)
- [mistralai/mistral-large](/mistralai/mistral-large)
- [mistralai/mistral-large-2407](/mistralai/mistral-large-2407)
- [mistralai/mistral-large-2411](/mistralai/mistral-large-2411)
- [mistralai/mistral-medium-3.1](/mistralai/mistral-medium-3.1)
- [mistralai/mistral-nemo](/mistralai/mistral-nemo)
- [mistralai/mistral-7b-instruct](/mistralai/mistral-7b-instruct)
- [mistralai/mixtral-8x7b-instruct](/mistralai/mixtral-8x7b-instruct)
- [mistralai/mixtral-8x22b-instruct](/mistralai/mixtral-8x22b-instruct)
- [mistralai/codestral-2508](/mistralai/codestral-2508)
- [x-ai/grok-4](/x-ai/grok-4)
- [x-ai/grok-3](/x-ai/grok-3)
- [x-ai/grok-3-mini](/x-ai/grok-3-mini)
- [deepseek/deepseek-r1](/deepseek/deepseek-r1)
- [meta-llama/llama-3.3-70b-instruct](/meta-llama/llama-3.3-70b-instruct)
- [meta-llama/llama-3.1-405b-instruct](/meta-llama/llama-3.1-405b-instruct)
- [meta-llama/llama-3.1-70b-instruct](/meta-llama/llama-3.1-70b-instruct)
- [meta-llama/llama-3.1-8b-instruct](/meta-llama/llama-3.1-8b-instruct)
- [meta-llama/llama-3-70b-instruct](/meta-llama/llama-3-70b-instruct)
- [meta-llama/llama-3-8b-instruct](/meta-llama/llama-3-8b-instruct)
- [qwen/qwen3-235b-a22b](/qwen/qwen3-235b-a22b)
- [qwen/qwen3-32b](/qwen/qwen3-32b)
- [qwen/qwen3-14b](/qwen/qwen3-14b)
- [cohere/command-r-plus-08-2024](/cohere/command-r-plus-08-2024)
- [cohere/command-r-08-2024](/cohere/command-r-08-2024)
- [moonshotai/kimi-k2-thinking](/moonshotai/kimi-k2-thinking)
- [perplexity/sonar](/perplexity/sonar)
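Because the router picks a model per request, reading the response's `model` attribute is the programmatic way to find out which model actually ran. A minimal sketch, using a stub object in place of a live API response (a real response from the OpenAI SDK exposes the same attribute):

```python
from types import SimpleNamespace

def routed_model(response):
    """Return the concrete model id the auto router selected."""
    return response.model

# Stub standing in for the object returned by
# client.chat.completions.create(model="openrouter/auto", ...);
# the routed model id shown here is just an example.
stub = SimpleNamespace(model="openai/gpt-5-mini")
print(routed_model(stub))  # -> openai/gpt-5-mini
```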
## Pricing

- OpenRouter cost (per 1M tokens)
- InvoiceLab monthly estimate (based on 0.5M input + 0.5M output tokens)
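The monthly estimate above is straightforward arithmetic over the 0.5M input + 0.5M output basis. A sketch, where the per-million-token rates are placeholders (the actual rate is whatever the routed model charges):

```python
# Placeholder rates; with openrouter/auto the real rate matches
# whichever model each request was routed to.
INPUT_RATE_PER_M = 2.00   # USD per 1M input tokens (assumed)
OUTPUT_RATE_PER_M = 8.00  # USD per 1M output tokens (assumed)

input_tokens = 500_000    # 0.5M input
output_tokens = 500_000   # 0.5M output

cost = (input_tokens * INPUT_RATE_PER_M
        + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000
print(f"${cost:.2f}")  # -> $5.00
```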
## Model Information

### Basic Information
| Item | Value |
| --- | --- |
| Model ID | `openrouter/auto` |
| Provider | OpenRouter |
| Context window | 2,000,000 tokens |
| Modality | text+image+file+audio+video → text+image |
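Since the input modality includes images, a request can attach an image alongside text using the OpenAI chat-completions content-parts format. A sketch; the image URL is a placeholder, and whether a given routed model accepts images varies by model:

```python
# Multimodal user message in the OpenAI chat-completions format.
# The URL is a placeholder; pass the message via
# client.chat.completions.create(model="openrouter/auto", messages=[message]).
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/photo.png"}},
    ],
}

print([part["type"] for part in message["content"]])  # -> ['text', 'image_url']
```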
### Supported Features
## API Usage
### Python (OpenAI SDK compatible)

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-dream-api-key",
    base_url="https://api.invoicedream.co.kr/v1",
)

response = client.chat.completions.create(
    model="openrouter/auto",
    messages=[
        {"role": "user", "content": "Hello"}
    ],
)

print(response.choices[0].message.content)
```

### Node.js / TypeScript
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-dream-api-key',
  baseURL: 'https://api.invoicedream.co.kr/v1'
});

const response = await client.chat.completions.create({
  model: 'openrouter/auto',
  messages: [{ role: 'user', content: 'Hello' }]
});

console.log(response.choices[0].message.content);
```

### cURL
```bash
curl https://api.invoicedream.co.kr/v1/chat/completions \
  -H "Authorization: Bearer your-dream-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openrouter/auto",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

💡 Tip: You can use the OpenAI SDK as-is. Just change the `base_url`!
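If the proxy also passes through the OpenAI SDK's `stream=True` option (not confirmed on this page), responses can be consumed incrementally. A sketch with a small helper, demonstrated on stub chunks shaped like the SDK's streaming objects:

```python
from types import SimpleNamespace

def collect_stream(chunks):
    """Join the text deltas of a streamed chat completion."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta is not None:
            parts.append(delta)
    return "".join(parts)

# Stub chunks for illustration; a real call would look like:
#   stream = client.chat.completions.create(
#       model="openrouter/auto", messages=[...], stream=True)
#   print(collect_stream(stream))
def chunk(text):
    return SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

print(collect_stream([chunk("Hel"), chunk("lo"), chunk(None)]))  # -> Hello
```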