MiniMax: MiniMax M1
MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It pairs a hybrid Mixture-of-Experts (MoE) architecture with a custom "lightning attention" mechanism, allowing it to process sequences of up to 1 million tokens while maintaining competitive FLOP efficiency. With 456B total parameters and 45.9B active per token, the model is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning pipeline (CISPO), M1 excels in long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models such as DeepSeek R1 and Qwen3-235B.
OpenRouter base cost (per 1M tokens)
InvoiceLab (estimated monthly)
Based on 0.5M input + 0.5M output tokens
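As a worked example of that basis, the sketch below computes an estimate from per-million-token rates. The rates shown are hypothetical placeholders, not the actual OpenRouter or InvoiceLab prices; substitute the published figures for this model.

```python
# Hypothetical cost estimate for 0.5M input + 0.5M output tokens.
# The rates below are placeholders, NOT actual OpenRouter/InvoiceLab pricing.
INPUT_PRICE_PER_M = 0.40   # USD per 1M input tokens (placeholder)
OUTPUT_PRICE_PER_M = 2.20  # USD per 1M output tokens (placeholder)

input_tokens = 500_000   # 0.5M input
output_tokens = 500_000  # 0.5M output

estimate = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
print(f"Estimated cost: ${estimate:.2f}")
```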
Model Information
Basic Information
| Field | Value |
|---|---|
| Model ID | minimax/minimax-m1 |
| Provider | MiniMax |
| Context window | 1,000,000 tokens |
| Modality | text->text |
Supported Features
API Usage
Python (OpenAI SDK compatible)
```python
from openai import OpenAI

# Point the OpenAI SDK at the InvoiceLab gateway endpoint
client = OpenAI(
    api_key="your-dream-api-key",
    base_url="https://api.invoicedream.co.kr/v1"
)

response = client.chat.completions.create(
    model="minimax/minimax-m1",
    messages=[
        {"role": "user", "content": "Hello"}
    ]
)

print(response.choices[0].message.content)
```

Node.js / TypeScript
```typescript
import OpenAI from 'openai';

// Same client setup; only the baseURL changes
const client = new OpenAI({
  apiKey: 'your-dream-api-key',
  baseURL: 'https://api.invoicedream.co.kr/v1'
});

const response = await client.chat.completions.create({
  model: 'minimax/minimax-m1',
  messages: [{ role: 'user', content: 'Hello' }]
});

console.log(response.choices[0].message.content);
```

cURL
```bash
curl https://api.invoicedream.co.kr/v1/chat/completions \
  -H "Authorization: Bearer your-dream-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "minimax/minimax-m1",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

💡 Tip: You can use the OpenAI SDK as-is. Just change the base_url!
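Because the gateway is OpenAI-compatible, streaming should also work through the same SDK. A minimal sketch, assuming the endpoint forwards the standard Chat Completions `stream=True` protocol unchanged:

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-dream-api-key",
    base_url="https://api.invoicedream.co.kr/v1"
)

# Print tokens as they arrive (assumes standard OpenAI streaming chunks).
stream = client.chat.completions.create(
    model="minimax/minimax-m1",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```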