Z.ai (formerly Zhipu AI) released GLM-5.1 on April 7: a 754-billion-parameter Mixture-of-Experts model (roughly 40B parameters active per token) under an MIT license, with a 200K-token context window and up to 131K output tokens. It scored 58.4% on SWE-Bench Pro, edging out GPT-5.4 (57.7%) and Claude Opus 4.6 (57.3%), and is explicitly engineered for long-horizon agentic coding: the company reports the model sustaining hundreds of iterations and thousands of tool calls without strategy drift, with documented tasks running up to eight hours. Weights are available on Hugging Face and ModelScope, with inference support in vLLM and SGLang. On general reasoning and knowledge tasks it trails Google's and OpenAI's top models.
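For readers who want to try the open weights locally, a minimal vLLM serving sketch follows. The model identifier `zai-org/GLM-5.1` is an assumption here (check the organization's Hugging Face page for the exact repo name), and the parallelism and context settings are illustrative, not official guidance; a model of this size needs a multi-GPU node.

```shell
# Install vLLM, which exposes an OpenAI-compatible server via `vllm serve`.
pip install vllm

# Hypothetical repo id -- verify the exact name on Hugging Face before running.
# --tensor-parallel-size shards the MoE weights across 8 GPUs (illustrative);
# --max-model-len caps the context at the advertised 200K tokens.
vllm serve zai-org/GLM-5.1 \
    --tensor-parallel-size 8 \
    --max-model-len 200000
```

Once the server is up, any OpenAI-compatible client can point at `http://localhost:8000/v1` to issue chat or completion requests.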