HONG KONG — Alibaba (09988) unveiled its latest series of AI models, Qwen3, early Tuesday, positioning the launch as a direct challenge to leading models from OpenAI and Google. The company claims the new Qwen3 models rival or outperform top-tier competitors on benchmarks covering mathematics, programming, and reasoning.
The Qwen3 lineup includes two Mixture-of-Experts (MoE) models and six dense models, with parameter counts ranging from 600 million to 235 billion. The flagship model, Qwen3-235B-A22B, reportedly delivers competitive results against models such as DeepSeek-R1, OpenAI's o1 and o3-mini, xAI's Grok 3, and Google's Gemini 2.5 Pro across multiple performance benchmarks.
One highlight is the compact MoE model Qwen3-30B-A3B, which reportedly outperforms QwQ-32B while activating only about 10% as many parameters (3 billion versus 32 billion). The small Qwen3-4B model is also said to match or surpass the older Qwen2.5-72B-Instruct. Alibaba has open-sourced all six dense models — Qwen3-32B, Qwen3-14B, Qwen3-8B, Qwen3-4B, Qwen3-1.7B, and Qwen3-0.6B — under the Apache 2.0 license.
The Qwen3 series features two reasoning modes: a ‘thinking mode’ for step-by-step reasoning in complex tasks, and a ‘non-thinking mode’ for fast responses suited to real-time applications. According to Alibaba, Qwen3 now delivers state-of-the-art (SOTA) reasoning capabilities in mathematics, coding, and logic.
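In the open-source serving stacks that host Qwen3, the mode switch is typically exposed as a per-request option. The sketch below shows what such a request payload might look like; the server URL is omitted, and the model name and the `chat_template_kwargs`/`enable_thinking` parameter names follow Qwen's published usage notes but should be treated as assumptions, not a verified API:

```python
# Hedged sketch: toggling Qwen3's "thinking" vs. "non-thinking" mode in an
# OpenAI-compatible chat-completion payload. The `chat_template_kwargs` and
# `enable_thinking` names are assumptions based on Qwen's usage notes.
import json


def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completion payload for a locally served Qwen3 model."""
    return {
        "model": "Qwen3-30B-A3B",  # one of the open-weight Qwen3 models
        "messages": [{"role": "user", "content": prompt}],
        # Pass-through of chat-template options (assumed parameter names):
        "chat_template_kwargs": {"enable_thinking": thinking},
    }


# Thinking mode: the model reasons step by step before answering.
slow = build_request("Prove that the square root of 2 is irrational.", True)
# Non-thinking mode: fast, direct replies for latency-sensitive applications.
fast = build_request("What is the capital of France?", False)

print(json.dumps(slow, indent=2))
```

The same pattern lets an application route hard queries through the slower thinking mode while keeping chat-style traffic on the fast path.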