Tencent Hunyuan Releases Open-Source 7B Model, Beats o1-Mini On Many Benchmarks

The top US labs are all looking to build the biggest training clusters to train the biggest models, but China is quietly innovating on much smaller models — and delivering some impressive results.

Tencent has released four new “compact” Hunyuan models, with 0.5B, 1.8B, 4B, and 7B parameters. The models are open-source and have a 256K context window. In self-reported benchmarks, Hunyuan says its 7B model is comparable to o1-mini, OpenAI’s own small model.

“We’re expanding the Tencent Hunyuan open-source LLM ecosystem with four compact models (0.5B, 1.8B, 4B, 7B)! Designed for low-power scenarios like consumer-grade GPUs, smart vehicles, smart home devices, mobile phones, and PCs, these models support cost-effective fine-tuning for vertical applications, empowering developers and enterprises with a broader selection for diverse use cases,” Hunyuan posted on X.

“Each of the four models only requires a single card for deployment and integrate directly into PCs, phones, and tablets, supporting mainstream frameworks like SGLang, vLLM, and TensorRT-LLM,” it added.
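For developers, trying one of these models locally should look much like any other open-weights release. Below is a minimal sketch using vLLM, one of the frameworks Tencent names; the repo id tencent/Hunyuan-7B-Instruct is an assumption, so check Tencent’s Hugging Face page for the exact checkpoint name before running it.

```python
# Minimal sketch: serving a Hunyuan 7B checkpoint with vLLM on a single GPU.
# "tencent/Hunyuan-7B-Instruct" is an assumed repo id, not confirmed by the release post.
from vllm import LLM, SamplingParams

llm = LLM(
    model="tencent/Hunyuan-7B-Instruct",  # assumed Hugging Face repo id
    trust_remote_code=True,               # custom model code may be required
    max_model_len=32768,                  # raise toward 256K if VRAM allows
)

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(
    ["Summarize the trade-offs between a 7B model and a 70B model."],
    params,
)
print(outputs[0].outputs[0].text)
```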

On several benchmarks, including GPQA, MATH-500, AIME 2024, AIME 2025, and LiveCodeBench v5 and v6, Hunyuan’s 7B model performed better than OpenAI’s o1-mini. OpenAI hasn’t revealed how many parameters o1-mini has, but it’s estimated to have over 100B.

Tencent Hunyuan 7B model benchmarks

As expected, benchmark results for the other three models improved with parameter count.

Tencent Hunyuan 0.5B, 1.8B, 4B benchmarks

These are impressive results, and yet another sign that China seems to be taking a nearly unassailable lead in the open-source model race. While US labs focus on building ever-larger models, Chinese labs have gotten very good at creating smaller models that are not only cheaper but also open-source, and that can run on consumer GPUs and, in some cases, ordinary PCs and laptops. It might still be early days, but Chinese AI is carving out a niche in small, open-source models where it faces little real competition from US labs.
