

Power your next AI agent with GLM-4.7, where code, reasoning, and creativity converge at scale.
GLM-4.7 represents Zhipu AI's latest advancement in large language models, prioritizing agentic coding, stable multi-step reasoning, and complex workflows. Released in December 2025, it supports a 200K token context window and up to 128K output tokens, enabling robust handling of extended tasks.
GLM-4.7 leads open-source models on coding benchmarks at 128K context, achieving 73.8% on SWE-bench Verified (up 5.8 points from GLM-4.6) and 84.9% on LiveCodeBench v6, surpassing Claude Sonnet 4.5. Reasoning scores include 42.8% on HLE with tools (a 41% relative gain over its predecessor) and 95.7% on AIME 2025.
GLM-4.7 excels in programming with a "think before acting" mechanism, improving multi-language code generation and terminal agent performance. It integrates function calling, structured JSON outputs, context caching, and real-time streaming for seamless developer workflows.
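As a sketch of how these features fit together, the snippet below assembles an OpenAI-style chat-completion payload that combines function calling with streaming. The model id "glm-4.7" and the "run_tests" tool are illustrative assumptions; consult your provider's API reference for the actual endpoint and schema. Nothing is sent over the network here.

```python
import json

MODEL = "glm-4.7"  # hypothetical model id -- check your provider's docs


def build_chat_request(user_prompt: str, stream: bool = True) -> dict:
    """Assemble an OpenAI-style chat request exercising function calling
    and streaming. The payload is built locally, not sent anywhere."""
    return {
        "model": MODEL,
        "stream": stream,  # request token-by-token streaming
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": [
            {
                # One example tool schema (hypothetical): lets the model
                # ask the agent runtime to execute the project's tests.
                "type": "function",
                "function": {
                    "name": "run_tests",
                    "description": "Run the test suite and return results.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "path": {
                                "type": "string",
                                "description": "Directory containing tests",
                            },
                        },
                        "required": ["path"],
                    },
                },
            }
        ],
    }


payload = build_chat_request("Fix the failing test in tests/test_api.py")
print(json.dumps(payload, indent=2))
```

Sending this payload to an OpenAI-compatible endpoint would stream deltas back; a non-streaming variant just sets `stream=False`.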
Seamlessly writes, debugs, and deploys code across languages—ideal for terminal-based and IDE-integrated workflows.
Generates modern, aesthetically refined HTML/CSS/JS and presentation slides with accurate layout, spacing, and design sensibility.
Creates high-fidelity PPTs, posters, technical documentation, and narrative content with consistent tone and structure.
Excels at synthesizing information from long contexts, extracting insights, and generating evidence-backed summaries.
GLM-4.7 powers agentic coding by autonomously handling requirement decomposition, multi-stack integration, and executable frameworks for prototypes. Developers use it for multimodal apps with real-time controls and gesture recognition, accelerating from concept to deployment.
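The agentic pattern described above can be sketched as a minimal plan-act loop: the model either emits a tool call, which the runtime executes and feeds back, or a final answer. Everything here is illustrative, assuming a stubbed model function and a hypothetical `read_file` tool in place of a live GLM-4.7 call.

```python
def fake_model(messages: list[dict]) -> dict:
    """Stand-in for a GLM-4.7 call: if no tool result is in the
    conversation yet, request one; otherwise return a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "read_file",
                              "arguments": {"path": "app.py"}}}
    return {"content": "Patched app.py: added missing import."}


# Hypothetical tool registry the agent runtime exposes to the model.
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
}


def agent_loop(task: str, model=fake_model, max_steps: int = 5) -> str:
    """Dispatch model-requested tool calls until a final answer arrives."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # final answer, loop ends
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")


print(agent_loop("Fix the import error in app.py"))
# → Patched app.py: added missing import.
```

In a real deployment the stub would be replaced by an API call, with a step budget and tool allowlist guarding autonomous execution.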
vs. GLM-4.6: GLM-4.7 boosts coding stability with 73.8% on SWE-bench Verified (up 5.8 points) and 41% on Terminal Bench 2.0 (up 16.5 points), plus a 41% relative HLE gain enabled by preserved thinking.
vs. Claude Sonnet 4.5: Surpasses it on LiveCodeBench v6 at 84.9% and leads open-source models on SWE-bench Verified, though it trails slightly on Terminal Bench; excels in tool use at 87.4% on τ²-Bench.
vs. GPT-5.1: Competitive at 84.9% on LiveCodeBench v6 and 73.8% on SWE-bench Verified, with superior open-source accessibility and roughly 3x the speed in real-world tests.