Something is afoot in the land of Qwen

Published: March 4, 2026 at 10:55 AM EST
3 min read

Source: Hacker News

Recent developments at Alibaba’s Qwen team

Resignations and organizational changes

The news began with a tweet from Junyang Lin (@JustinLin610):

me stepping down. bye my beloved qwen.

Junyang Lin was the lead researcher building Qwen and a key figure in releasing the open‑weight models from 2024 onward. A possible trigger for his resignation was a re‑org at Alibaba that placed a newly hired researcher from Google’s Gemini team in charge of Qwen, though this detail has not been confirmed.

Further information appears in an article from 36kr.com, a credible Chinese technology news outlet established in 2010. The article, written in Chinese, includes the following translated excerpts:

At approximately 1:00 PM Beijing time on March 4, Tongyi Lab held an emergency All‑Hands meeting, where Alibaba Group CEO Wu Yongming frankly addressed Qianwen employees.
Twelve hours earlier (0:11 AM Beijing time on March 4), Lin Junyang, the technical lead for Alibaba’s Qwen large model, suddenly announced his resignation on X. Lin Junyang was a key figure in promoting Alibaba’s open‑source AI models and one of Alibaba’s youngest P10 employees. Many Qwen members were unable to accept the sudden departure of their team’s key figure.
“Given far fewer resources than competitors, Junyang’s leadership is one of the core factors in achieving today’s results,” multiple Qianwen members told 36Kr.
Regarding Lin Junyang’s whereabouts, no new conclusions were reached at the meeting. However, around 2 PM he posted again on his WeChat Moments, stating, “Brothers of Qwen, continue as originally planned, no problem,” without explicitly confirming whether he would return.

The piece also lists several other key members who have apparently resigned:

  • Binyuan Hui – Lead Qwen code development, principal of the Qwen‑Coder series, responsible for the entire agent training pipeline and recently involved in robotics research.
  • Bowen Yu – Lead Qwen post‑training research, graduate of the University of Chinese Academy of Sciences, leading development of the Qwen‑Instruct series.
  • Kaixin Li – Core contributor to Qwen 3.5/VL/Coder, PhD from the National University of Singapore.

In addition, many younger researchers resigned on the same day. The presence of Alibaba’s CEO at the emergency All‑Hands meeting suggests the company recognizes the significance of these resignations and may still retain some of the departing talent.

Qwen 3.5 is exceptional

This story hits particularly hard because the Qwen 3.5 models appear to be exceptionally good.

The scale of the new model family is impressive. It started with Qwen‑3.5‑397B‑A17B on February 17—an 807 GB model—and was followed by a flurry of smaller siblings in the 122 B, 35 B, 27 B, 9 B, 4 B, 2 B, and 0.8 B sizes (Hugging Face collection).

Positive feedback is emerging for the 27 B and 35 B models on coding tasks that still fit on a 32 GB/64 GB Mac. I have tried the 9 B, 4 B, and 2 B models and found them notably effective given their tiny footprints. The 2 B model is just 4.57 GB—or as small as 1.27 GB when quantized—and is a full reasoning and multimodal (vision) model.
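Those file sizes line up roughly with simple parameter arithmetic: a model's weight file is approximately its parameter count times the bits stored per weight. A minimal sketch (my own illustration, not anything from the Qwen release) shows why a ~2 B‑parameter model lands near 4 GB in bf16 and near 1 GB at ~4‑bit quantization; the reported 4.57 GB and 1.27 GB figures suggest somewhat more than 2 B parameters plus layers kept at higher precision and file metadata:

```python
def approx_model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-file size: parameter count x bits per weight, in decimal GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 2 B-parameter model stored in bf16 (16 bits per weight):
full = approx_model_size_gb(2, 16)   # 4.0 GB
# The same parameters quantized to ~4 bits per weight:
quant = approx_model_size_gb(2, 4)   # 1.0 GB
print(f"bf16: {full:.2f} GB, ~4-bit: {quant:.2f} GB")
```

Real checkpoints run a little heavier than this estimate because embeddings and norm layers are often left unquantized, but it is a useful first-order sanity check when deciding whether a model fits in local RAM.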

It would be a real tragedy if the Qwen team were to disband now, given their proven track record of extracting high‑quality results from ever‑smaller models. If those core members start new projects or join other research labs, I’m excited to see what they do next.
