Instructions for using TechxGenus/Seed-Coder-8B-Instruct-DWQ with libraries, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use TechxGenus/Seed-Coder-8B-Instruct-DWQ with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("TechxGenus/Seed-Coder-8B-Instruct-DWQ")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
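To stream tokens as they are produced instead of waiting for the full completion, mlx-lm also exposes a `stream_generate` helper. A minimal sketch (exact response fields may vary across mlx-lm versions):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("TechxGenus/Seed-Coder-8B-Instruct-DWQ")

messages = [{"role": "user", "content": "Write a binary search in Python"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# stream_generate yields responses incrementally;
# each response carries the newly decoded text segment
for response in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```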
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- MLX LM
How to use TechxGenus/Seed-Coder-8B-Instruct-DWQ with MLX LM:
Generate or start a chat session
```bash
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "TechxGenus/Seed-Coder-8B-Instruct-DWQ"
```
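For one-shot generation without entering the REPL, mlx-lm also ships an `mlx_lm.generate` command. A sketch (flag names follow current mlx-lm releases):

```bash
# Single prompt, no interactive session; --max-tokens caps the response length
mlx_lm.generate --model "TechxGenus/Seed-Coder-8B-Instruct-DWQ" \
  --prompt "Write a quicksort function in Python" \
  --max-tokens 512
```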
Run an OpenAI-compatible server
```bash
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "TechxGenus/Seed-Coder-8B-Instruct-DWQ"

# Calling the OpenAI-compatible server with curl
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TechxGenus/Seed-Coder-8B-Instruct-DWQ",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
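Because the server speaks the OpenAI chat-completions protocol, any OpenAI-compatible client can call it. A minimal sketch using the `openai` Python package (the base URL and port assume the server invocation above; the API key is a placeholder, since the local server does not validate it):

```python
from openai import OpenAI

# Point the client at the local mlx_lm.server endpoint
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="TechxGenus/Seed-Coder-8B-Instruct-DWQ",
    messages=[{"role": "user", "content": "Explain memoization in two sentences."}],
)
print(response.choices[0].message.content)
```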