Alibaba launches Qwen3.5, a 397B-parameter open-weight MoE model (17B active). 60% cheaper, 8x better at large workloads than Qwen3.
[Alibaba](https://startupintros.com/orgs/alibaba) released Qwen3.5 on February 15, 2026 — a 397B-parameter open-weight mixture-of-experts (MoE) multimodal model with only 17B active parameters. The efficiency claims are remarkable: Alibaba says it is 60% cheaper and 8x better at large workloads than Qwen3, and it ranks #3 among open-weight models in the Artificial Analysis Intelligence Index.
The MoE architecture is the story — 397B total parameters but only 17B active means near-frontier performance at a fraction of the inference cost. Alongside the open-weight model, Alibaba also launched Qwen3.5 Plus as a proprietary API.
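To see why only a sliver of the parameters run per token, here is a minimal NumPy sketch of top-k MoE routing. All numbers (expert count, top-k, layer sizes) are illustrative assumptions, not Qwen3.5's actual architecture — the point is just that active parameters scale with `top_k / n_experts`, not with total size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only (NOT Qwen3.5's real config).
n_experts = 64   # total experts in the layer
top_k = 4        # experts activated per token
d_model = 512    # token embedding size
d_ff = 2048      # expert hidden size

# Each expert is a small two-layer MLP: d_model -> d_ff -> d_model.
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.02,
     rng.standard_normal((d_ff, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route one token through its top-k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]           # indices of chosen experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                    # softmax over chosen experts
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0) @ w2)  # ReLU MLP expert
    return out

token = rng.standard_normal(d_model)
y = moe_forward(token)

total_params = n_experts * 2 * d_model * d_ff
active_params = top_k * 2 * d_model * d_ff
print(f"active fraction: {active_params / total_params:.2%}")  # → 6.25%
```

With these toy numbers, each token touches only 4 of 64 experts, so compute per token tracks the small active fraction — the same mechanism that lets a 397B model run roughly like a 17B one at inference.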
For companies running AI at scale, this is a massive cost reduction. Open-source AI keeps getting scarier for closed-model companies like [OpenAI](https://startupintros.com/orgs/openai) and Anthropic.