

Designed for complex workflows, agentic automation, and long-context reasoning, K2.5 transforms research, document processing, and software development.
It reads images and text together, orchestrates up to 100 cooperative sub‑agents, and automates end‑to‑end workflows such as design‑to‑code, research pipelines, and multi‑step business tasks.
Kimi K2.5 runs inside the broader Kimi AI ecosystem, alongside products like Kimi Researcher, Kimi Dev, and Kimi for Docs/Slides/Sheets, giving teams a unified workspace for content, data, and code.
Kimi K2.5 is built on top of Kimi K2, Moonshot’s flagship MoE (Mixture‑of‑Experts) model. Kimi K2 uses a 1‑trillion‑parameter MoE architecture with about 32 billion active parameters per inference, balancing capacity with efficiency. This foundation gives K2.5 strong general reasoning, robust coding skills, and competitive performance on benchmarks like MMLU, GSM8K, and HumanEval.
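To make the "1 trillion total parameters, ~32 billion active" idea concrete, here is a toy sketch of top‑k Mixture‑of‑Experts routing, the general technique such architectures use. All dimensions and the routing details below are illustrative assumptions, not Kimi K2's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 16, 8, 2  # toy sizes, not K2's real ones

# Each "expert" is a small feed-forward layer (here just one matrix);
# a learned router scores which experts each token should use.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_forward(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    logits = x @ router                # one score per expert
    top = np.argsort(logits)[-top_k:]  # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the chosen experts only
    # Only top_k of n_experts actually run per token, which is why the
    # "active" parameter count is a small fraction of the total.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d_model))
```

In this sketch only 2 of 8 experts execute per token, mirroring (at toy scale) how an MoE model can hold far more parameters than it spends compute on for any single inference.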
K2.5 introduces “Visual Agentic Intelligence,” enabling the model to understand images (such as UI screenshots) and act as an autonomous agent that plans, executes, and refines multi‑step tasks. It can interpret layouts, map them to structured code, and coordinate sub‑agents to deliver production‑ready UI components and workflows.
Within Kimi AI, the K2 line is known for an extended context window of up to 128,000 tokens across products and interfaces, supporting complex documents, codebases, and long conversations. The broader Kimi AI platform advertises “ultra‑long context” (over 2 million tokens) for certain experiences, enabling deep multi‑document and conversational understanding.
Kimi K2, the foundation for K2.5, delivers top‑tier scores on several public benchmarks, performing competitively with leading frontier models.
