# Qwen3-4B-Instruct-2507-Math-Code-Fr
This model was obtained by merging SamsungSAILMontreal/Qwen3-4B-Instruct-2507-Math, SamsungSAILMontreal/Qwen3-4B-Instruct-2507-Code, and SamsungSAILMontreal/Qwen3-4B-Instruct-2507-Fr. It is used in the experiments described at https://bknyaz.github.io/blog/2026/meta-merge/. A single A100 GPU was used for both merging and evaluation.
The following versions were used for merging and evaluation:

- python >= 3.10
- torch: 2.9.0+cu128
- lm_eval: 0.4.9.1
- vllm: 0.11.1
- transformers: 4.57.6
- datasets: 3.2.0
- numpy: 2.2.6
## Merging
Merging was done with simple parameter averaging, implemented in merge_qwen.py.
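merge_qwen.py itself is not reproduced here, so the following is only a minimal sketch of uniform parameter averaging with transformers; the output directory name and the float32 accumulation are our assumptions, not necessarily what the script does:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The three fine-tuned checkpoints being merged.
MODELS = [
    "SamsungSAILMontreal/Qwen3-4B-Instruct-2507-Math",
    "SamsungSAILMontreal/Qwen3-4B-Instruct-2507-Code",
    "SamsungSAILMontreal/Qwen3-4B-Instruct-2507-Fr",
]
OUT_DIR = "Qwen3-4B-Instruct-2507-Math-Code-Fr"  # illustrative output path

# Use the first model as the merge target; accumulate in float32
# to avoid bfloat16 rounding error during the sum.
merged = AutoModelForCausalLM.from_pretrained(MODELS[0], torch_dtype=torch.bfloat16)
avg_state = {k: v.float() for k, v in merged.state_dict().items()}

for name in MODELS[1:]:
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
    for k, v in model.state_dict().items():
        avg_state[k] += v.float()
    del model  # free memory before loading the next checkpoint

# Uniform average over all checkpoints, cast back to bfloat16.
merged.load_state_dict(
    {k: (v / len(MODELS)).to(torch.bfloat16) for k, v in avg_state.items()}
)
merged.save_pretrained(OUT_DIR)
AutoTokenizer.from_pretrained(MODELS[0]).save_pretrained(OUT_DIR)
```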
## Evaluation
Evaluation was done with lm_eval on the test split of gsm8k, french_bench (average score), gsm8k-fr, and humaneval (instruct):

```bash
python -m lm_eval --model vllm \
  --model_args pretrained=${model},tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1 \
  --tasks gsm8k,french_bench,gsm8k-fr,humaneval_instruct \
  --batch_size 1 --apply_chat_template=True --confirm_run_unsafe_code --trust_remote_code
```
To evaluate on gsm8k-fr, you can use our fork: https://github.com/bknyaz/lm-evaluation-harness/tree/main/lm_eval/tasks/gsm8k.
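As a quick sanity check before running the full harness, the merged model can be queried directly through vLLM's offline API. A minimal sketch, assuming the merged checkpoint was saved locally under the directory name used in the merging sketch above (the prompt is just an illustrative French math question):

```python
from vllm import LLM, SamplingParams

# Point `model` at the merged checkpoint directory or its Hub repo id.
llm = LLM(model="Qwen3-4B-Instruct-2507-Math-Code-Fr", dtype="auto",
          gpu_memory_utilization=0.9)
params = SamplingParams(temperature=0.0, max_tokens=256)

messages = [{"role": "user",
             "content": "Combien font 17 * 24 ? Explique ton calcul en français."}]
out = llm.chat(messages, params)  # applies the model's chat template
print(out[0].outputs[0].text)
```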
## Results

All scores are in percent; the avg column is the unweighted mean over the four tasks.

| Model | gsm8k | french_bench (avg) | gsm8k-fr | humaneval_instruct | avg |
|---|---|---|---|---|---|
| Qwen3-4B-Instruct-2507 | 80.4 | 43.1 | 66.0 | 90.2 | 69.9 |
| Qwen3-4B-Instruct-2507-Math | 76.8 | 43.0 | 65.3 | 72.0 | 64.3 |
| Qwen3-4B-Instruct-2507-Fr | 72.3 | 45.7 | 60.7 | 74.4 | 63.3 |
| Qwen3-4B-Instruct-2507-Code | 72.5 | 45.4 | 53.0 | 76.2 | 61.8 |
| Qwen3-4B-Instruct-2507-Math-Code-Fr | 82.9 | 45.8 | 69.8 | 79.9 | 69.6 |
## License
Please refer to the licenses of the base models SamsungSAILMontreal/Qwen3-4B-Instruct-2507-Math, SamsungSAILMontreal/Qwen3-4B-Instruct-2507-Code, and SamsungSAILMontreal/Qwen3-4B-Instruct-2507-Fr.