- 295B total / 21B active / 256K context
- Fused fast-and-slow thinking in a single model
- First model trained on Hunyuan's rebuilt pretraining + RL infra (Feb-Apr)
Benchmarks:
- Competitive results on SWE-Bench Verified, Terminal-Bench 2.0, BrowseComp, and WideSearch, particularly strong on agentic tool use
- Top score on Tsinghua's 2026 Spring math PhD qualifying exam
- Strong in-context learning and instruction following on Tencent's CL-bench / CL-bench-Life