
Multilingual MoE model for diverse tasks
Qwen Max 2025-09-23, developed by Alibaba, is a large-scale Mixture-of-Experts (MoE) language model optimized for chat, coding, and complex problem-solving tasks. It employs a Transformer-based MoE architecture with expert routing, activating a subset of parameters per token to balance performance and compute efficiency. Pre-trained on over 20 trillion tokens from diverse multilingual and domain-specific corpora, Qwen Max supports deep reasoning and advanced text generation.
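For readers unfamiliar with expert routing, the sketch below illustrates generic top-k gating as used in MoE Transformers; it is not Qwen's actual routing code, and all names (`top_k_routing`, `gate_weights`, the toy dimensions) are hypothetical.

```python
import numpy as np

def top_k_routing(token_hidden, gate_weights, experts, k=2):
    """Route one token's hidden state to its top-k experts and
    combine their outputs, weighted by re-normalized gate scores."""
    logits = token_hidden @ gate_weights          # (num_experts,)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                          # softmax over experts
    top = np.argsort(probs)[-k:]                  # indices of the k best experts
    weights = probs[top] / probs[top].sum()       # re-normalize over chosen experts
    # Only the selected experts run, so per-token compute scales with k,
    # not with the total number of experts.
    return sum(w * experts[i](token_hidden) for w, i in zip(weights, top))

# Toy usage: 4 experts, each a simple linear map; only 2 run per token.
rng = np.random.default_rng(0)
d, num_experts = 8, 4
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(num_experts)]
gate = rng.normal(size=(d, num_experts))
out = top_k_routing(rng.normal(size=d), gate, experts, k=2)
print(out.shape)  # (8,)
```

This is the core idea behind the "activating a subset of parameters per token" claim above: the gate picks a few experts per token, so total parameter count can grow far beyond the compute spent on any single token.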


The model is available on the AI/ML API platform as "Qwen Max 2025-09-23".
Detailed API Documentation is available here.
Get Qwen Max 2025-09-23 API here.
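As a quick-start illustration, the sketch below calls the model through an OpenAI-compatible client. The base URL and the exact model identifier are assumptions here; verify both against the API documentation linked above.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aimlapi.com/v1",  # assumed AI/ML API endpoint
    api_key="<YOUR_AIML_API_KEY>",
)

response = client.chat.completions.create(
    model="qwen-max-2025-09-23",  # assumed model ID; check the platform's model page
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of MoE models in two sentences."},
    ],
)
print(response.choices[0].message.content)
```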