

MiniMax M1 is a 456B-parameter Mixture-of-Experts model optimized for ultra-long-context (1M-token) reasoning, outperforming peer models on coding, math, and logic benchmarks; available via the AI/ML API.
MiniMax M1 is an open-weight Mixture-of-Experts transformer with 456B total parameters and a context window of up to 1 million tokens. With an 80K-token output capacity, it is purpose-built for massive input processing, logical analysis, and deep code reasoning. It is well suited to RAG pipelines, legal and scientific workflows, and agentic tools.

Accessible via the AI/ML API. Documentation is available here.
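
For reference, below is a minimal sketch of calling MiniMax M1 through an OpenAI-compatible chat-completions client. The base URL, model identifier, and parameter values shown are assumptions, not confirmed by this page; consult the AI/ML API documentation for the exact values.

```python
# Minimal sketch: querying MiniMax M1 via an OpenAI-compatible endpoint.
# The base URL and model identifier are assumptions -- verify them against
# the AI/ML API documentation before use.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aimlapi.com/v1",  # assumed AI/ML API endpoint
    api_key="YOUR_AIML_API_KEY",            # placeholder API key
)

response = client.chat.completions.create(
    model="minimax/m1",  # hypothetical model identifier
    messages=[
        {"role": "system", "content": "You are a careful long-context reasoning assistant."},
        {"role": "user", "content": "Summarize the key obligations in the attached contract text."},
    ],
    max_tokens=4096,  # well under the model's stated 80K-token output capacity
)

print(response.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, the long-context and large-output capabilities are exercised simply by sending larger prompts and raising `max_tokens`; no special parameters are required in this sketch.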