We found that loading low-bit quantizations such as Q4, Q3, and Q2 causes performance degradation and repetition.

Mini-Coder-v2 (15B)

Mini-Coder-v2 is built on top of an upscaled merge of the Qwen3.5-9B and Qwen3.5-9B-Base models with Continual Pretraining (CPT). We fed it ~36.63k high-quality, curated Luau raw code and documentation texts to improve its Luau coding capability and knowledge.
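The exact CPT data pipeline is not published; as a hedged illustration, continual pretraining typically concatenates raw text and slices it into fixed-length chunks. A minimal sketch in plain Python (character-level chunking for brevity; a real pipeline would chunk by tokenizer tokens):

```python
def pack_for_cpt(documents, chunk_len=2048, sep="\n\n"):
    """Concatenate raw text documents and slice into fixed-length chunks.

    Illustrative only: real continual-pretraining pipelines chunk by
    *token* count with the model's tokenizer, not by characters.
    """
    stream = sep.join(documents)
    chunks = [stream[i:i + chunk_len] for i in range(0, len(stream), chunk_len)]
    # Drop a trailing fragment that is too short to be a useful sample.
    if chunks and len(chunks[-1]) < chunk_len // 4:
        chunks.pop()
    return chunks

docs = ["local function add(a, b) return a + b end"] * 100  # stand-in Luau snippets
chunks = pack_for_cpt(docs, chunk_len=512)
print(len(chunks), len(chunks[0]))
```

The chunks would then be tokenized and streamed to the trainer; the actual run used Unsloth with TRL, as noted below.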

This model upscales Qwen3.5-9B from 32 layers to 56 layers for deeper reasoning capability. The parameter count is 15.07B with Vision and 14.60B without Vision.
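The card does not state the upscaling recipe; a common approach is depth up-scaling, which builds a deeper model by concatenating overlapping layer ranges of the base model. A hedged sketch of the layer-index arithmetic for 32 → 56 layers (the actual Mini-Coder-v2 merge may differ):

```python
def depth_upscale_indices(n_layers, target_layers):
    """Return source-layer indices for a depth-up-scaled model.

    Takes the first k and last k layers of the base model, where
    2 * k == target_layers, so the overlapping middle block is duplicated.
    Illustrative only: the actual Mini-Coder-v2 merge recipe is unpublished.
    """
    assert target_layers % 2 == 0 and target_layers <= 2 * n_layers
    k = target_layers // 2
    return list(range(0, k)) + list(range(n_layers - k, n_layers))

indices = depth_upscale_indices(32, 56)
print(len(indices))  # layers in the upscaled stack
duplicated = [i for i in range(32) if indices.count(i) == 2]
print(len(duplicated))  # base layers that appear twice
```

With 32 base layers and a 56-layer target, k = 28, so layers 4–27 are duplicated.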

Uploaded finetuned model

  • Developed by: khtsly
  • License: apache-2.0
  • Finetuned from model: khtsly/Mini-Coder-v2-noft

This qwen3_5 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

GGUF

  • Model size: 14B params
  • Architecture: qwen35

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
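To pick the highest bit-width that fits your hardware (recommended, given the degradation observed at Q4 and below), you can roughly estimate each quantization's file size from the ~14.6B non-vision parameter count. A sketch with approximate effective bits-per-weight figures (assumptions, not measured file sizes; real GGUF files mix quant types per tensor):

```python
# Approximate effective bits per weight for common GGUF quant types.
# Ballpark assumptions for illustration; actual files vary.
BITS_PER_WEIGHT = {"Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_K_M": 4.8,
                   "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5}

def est_size_gib(n_params, bits):
    """Estimated weight-file size in GiB: params * bits / 8 bytes."""
    return n_params * bits / 8 / 2**30

n_params = 14.6e9  # non-vision parameter count from the card
sizes = {q: round(est_size_gib(n_params, b), 1) for q, b in BITS_PER_WEIGHT.items()}
print(sizes)
```

The estimate excludes KV-cache and runtime overhead, so leave headroom beyond the printed figure.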


Model tree for khtsly/Mini-Coder-v2-GGUF

  • Finetuned from: Qwen/Qwen3.5-9B

Datasets used to train khtsly/Mini-Coder-v2-GGUF