---
language:
- tr
- ar
- af
- az
- es
- en
- el
- ro
- ru
- rm
- th
- uk
- uz
- pl
- pt
- fa
- sk
- sl
- da
- de
- nl
- fr
- fi
- ka
- hi
- hu
- hy
- ja
- kk
- kn
- ko
- ku
- ky
- la
- lb
- id
- is
- it
- zh
- cs
- vi
- be
- bg
- bs
- ne
- mn
license: mit
tags:
- turkish
- türkiye
- english
- ai
- lamapi
- gemma3
- next
- next-x1
- efficient
- text-generation
- open-source
- 1b
- huggingface
- large-language-model
- llm
- causal
- transformer
- artificial-intelligence
- machine-learning
- ai-research
- natural-language-processing
- nlp
- finetuned
- lightweight
- creative
- summarization
- question-answering
- chat-model
- generative-ai
- optimized-model
- unsloth
- trl
- sft
- chemistry
- biology
- finance
- legal
- music
- art
- code
- climate
- medical
- agent
- text-generation-inference
pipeline_tag: text-generation
datasets:
- mlabonne/FineTome-100k
- ITCL/FineTomeOs
- Gryphe/ChatGPT-4o-Writing-Prompts
- dongguanting/ARPO-SFT-54K
- GreenerPastures/All-Your-Base-Full
- Gryphe/Opus-WritingPrompts
- HuggingFaceH4/MATH-500
- mlabonne/smoltalk-flat
- mlabonne/natural_reasoning-formatted
- OpenSPG/KAG-Thinker-training-dataset
- uclanlp/Brief-Pro
- CognitiveKernel/CognitiveKernel-Pro-SFT
- SuperbEmphasis/Claude-4.0-DeepSeek-R1-RP-SFWish
- QuixiAI/dolphin-r1
- mlabonne/lmsys-arena-human-sft-55k
library_name: transformers
---
# Next-1B (t416)
### *Lightweight, Efficient, and Türkiye-Focused AI*
[License: MIT](https://opensource.org/licenses/MIT)
[Model on Hugging Face](https://huggingface.co/Lamapi/next-1b)
---
## Overview
**Next-1B** is a **1-billion parameter causal language model** based on **Gemma 3**, designed for **efficiency, low-resource deployment, and reasoning-focused natural language understanding**.
Key highlights:
* Extremely **lightweight**: can run on consumer GPUs with low VRAM.
* Optimized for **text reasoning, summarization, and creative generation**.
* Supports **Turkish natively** while remaining multilingual.
* Open-source and transparent for research and applications.
Ideal for **developers, students, and organizations** needing **fast, reliable, and low-resource text generation**.
---
# Our Next 1B and Next 4B models lead all other tiny models on these benchmarks.
| Model                | MMLU (5-shot) % | MMLU-Pro % | GSM8K % | MATH % |
| -------------------- | --------------- | ---------- | ------- | ------ |
| Next 4B preview      | 84.6            | 66.9       | 82.7    | 70.5   |
| Next 1B Version t327 | 87.3            | 69.2       | 90.5    | 70.1   |
| Qwen 3 0.6B          | 52.81           | 37.6       | 60.7    | 20.5   |
| Llama 3.2 1B         | 49.3            | 44.4       | 11.9    | 30.6   |
---
# Our Next 14B model also leads state-of-the-art models on some benchmarks.
| Model                      | MMLU (5-shot) % | MMLU-Pro % | GSM8K % | MATH % |
| -------------------------- | --------------- | ---------- | ------- | ------ |
| Next 14B (Thinking)        | 94.6            | 93.2       | 98.8    | 92.7   |
| Next 12B                   | 92.7            | 84.4       | 95.3    | 87.2   |
| GPT-5                      | 92.5            | 87.0       | 98.4    | 96.0   |
| Claude Opus 4.1 (Thinking) | ~92.0           | 87.8       | 84.7    | 95.4   |
---
## Goals
1. **Lightweight Efficiency:** Run smoothly on low-resource devices.
2. **Reasoning-Focused:** Provide logical and coherent text outputs.
3. **Accessibility:** Fully open-source with clear documentation.
4. **Multilingual Adaptability:** Turkish-focused but supports other languages.
---
## Key Features
| Feature                  | Description                                                           |
| ------------------------ | --------------------------------------------------------------------- |
| Lightweight Architecture | Optimized for low VRAM usage; ideal for small GPUs or CPU deployment. |
| Turkish & Multilingual   | Handles complex Turkish prompts accurately.                           |
| Reasoning Capabilities   | Logical chain-of-thought for question-answering and problem-solving.  |
| Consistent Outputs       | Reliable and reproducible results across multiple runs.               |
| Open Source              | Transparent, research-friendly, and community-driven.                 |
---
## Model Specifications
| Specification | Details |
| ------------------ | ---------------------------------------------------------------------- |
| Base Model | Gemma 3 |
| Parameter Count | 1 Billion |
| Architecture | Transformer, causal LLM |
| Fine-Tuning Method | Instruction fine-tuning (SFT) with Turkish and multilingual datasets |
| Optimizations | Quantization-ready (q8, f16, f32) |
| Use Cases | Text generation, summarization, Q&A, creative writing, reasoning tasks |
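The specifications above list the model as quantization-ready (q8, f16, f32). A minimal sketch of loading the weights in half precision to reduce VRAM use is shown below; it assumes a CUDA-capable GPU and the `accelerate` package for `device_map="auto"`, and is illustrative rather than an official recipe.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Lamapi/next-1b"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the weights in float16 to roughly halve memory use compared to float32.
# device_map="auto" places layers on the available GPU(s) and requires `accelerate`.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
```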
---
## Installation & Usage
### Use the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "Lamapi/next-1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Chat message
messages = [
{"role": "system", "content": "You are Next-X1, a smart and concise AI assistant trained by Lamapi. Always respond in the user's language. Proudly made in Turkey."},
{"role": "user", "content": "Hello, how are you?"}
]
# Prepare input with Tokenizer
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
# Output from the model
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
**Example output:**
> Hello, how are you?
> I'm fine, thank you. How are you?
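For the creative-writing use case mentioned above, sampling can be enabled at generation time instead of greedy decoding. A short sketch continuing the example (the specific `temperature` and `top_p` values are illustrative, not recommendations from the model authors):

```python
# Continues the example above: `model`, `tokenizer`, and `inputs` are already defined.
output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,    # sample from the distribution instead of greedy decoding
    temperature=0.8,   # illustrative value; higher means more varied output
    top_p=0.95,        # nucleus sampling cutoff
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```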
---
## License
MIT License: free to use, modify, and distribute. Attribution appreciated.
---
## Contact & Support
* **Email:** [lamapicontact@gmail.com](mailto:lamapicontact@gmail.com)
* **HuggingFace:** [Lamapi](https://huggingface.co/Lamapi)
---
> **Next-1B**: lightweight, **efficient, and reasoning-focused**, bringing **Turkey's AI forward** on low-resource hardware.
[Lamapi on Hugging Face](https://huggingface.co/Lamapi)