> **In development!**

# phi-1.0-coder

**Phi 1.0 Coder** is fine-tuned from `microsoft/Phi-3.5-mini-instruct` on 50,000 Python examples from the `ronantakizawa/github-top-code` dataset.

This model is designed to generate clean, efficient Python code from English instructions. It is not multilingual: it expects English prompts and produces code with English comments.
## Key Features

- **Compact & fast:** only 3.8B parameters, runs on consumer GPUs (RTX 3060 and above).
- **Specialized for Python:** trained on real-world Python files from open-source repositories.
- **Optimized with Unsloth:** 2-3x faster training and lower memory usage.
- **Easy to use:** supports the standard ChatML conversation format.
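The ChatML layout mentioned above is a plain-text sequence of role-tagged turns. A minimal sketch, assuming the common `<|im_start|>`/`<|im_end|>` delimiters (in practice, prefer `tokenizer.apply_chat_template`, which reads the exact template from the model's tokenizer config):

```python
def to_chatml(messages):
    """Render a list of {"role": ..., "content": ...} dicts in generic ChatML layout."""
    turns = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    # The trailing assistant header prompts the model to generate the reply.
    return "\n".join(turns) + "\n<|im_start|>assistant\n"

print(to_chatml([{"role": "user", "content": "Write a prime checker in Python."}]))
```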
## Benchmarks

⚠️ Values will be added after evaluation.

The original `Phi-3.5-mini-instruct` scores around 62.8% on HumanEval. This model aims to excel specifically at Python tasks.
| Benchmark | Pass@1 (%) |
|---|---|
| HumanEval | XX.X |
| MBPP | XX.X |
## 🔧 How to Use

### Installation

```bash
pip install transformers accelerate
```
### Code Generation Example

```python
from transformers import pipeline

# Load the model (this can take 1-2 minutes on first download)
pipe = pipeline("text-generation", model="constructai/phi-1.0-coder")

# Complete a simple function signature
prompt = "def is_prime(n):"
output = pipe(prompt, max_new_tokens=100, do_sample=True, temperature=0.2)[0]["generated_text"]
print(output)
```
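By default the `text-generation` pipeline returns the prompt together with the continuation (`return_full_text=True`). A small helper, as one sketch, keeps only the newly generated part; alternatively, pass `return_full_text=False` to the pipeline call:

```python
def strip_prompt(generated: str, prompt: str) -> str:
    """Return only the continuation when the pipeline echoes the prompt back."""
    return generated[len(prompt):] if generated.startswith(prompt) else generated

# Example: completion = strip_prompt(output, prompt)
```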
## License

This model is released under the MIT license (the same as the original Phi-3.5). You are free to use, modify, and distribute it, including for commercial purposes.

## Author

Created by **constructai**. If you like this model, please leave a like and share your feedback!