
Alibaba releases Qwen3.6-27B, a 27B dense open-weight model outperforming its 397B MoE on coding

2026-04-22 19:05

Alibaba's Qwen team released Qwen3.6-27B on April 22 under Apache 2.0—a 27-billion-parameter dense model with a 262,144-token native context window (extensible to 1,010,000 with YaRN) and a hybrid Gated DeltaNet/Gated Attention architecture. Despite weighing roughly 55 GB versus the prior flagship's 807 GB, it outperforms the Qwen3.5-397B-A17B MoE on every major agentic coding benchmark, posting 77.2% on SWE-bench Verified; it also scores 86.2% on MMLU-Pro and 87.8% on GPQA Diamond. It is available on Hugging Face and ModelScope in BF16 and FP8 variants.
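For Qwen-family models, YaRN context extension is typically enabled via the `rope_scaling` field in the model's `config.json` (or passed as an override at load time). The values below are an illustrative sketch, not confirmed settings for Qwen3.6-27B: the scaling factor is assumed from the stated window sizes (262,144 × ~3.85 ≈ 1,010,000).

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 3.85,
    "original_max_position_embeddings": 262144
  }
}
```

Because static YaRN scaling applies regardless of input length, it can slightly degrade quality on short inputs, so it is usually left off unless long-context processing is actually needed.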
