Post 5074
We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗 Train gpt-oss locally on 12.8 GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
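For context, a minimal sketch of the kind of setup the linked notebooks use with Unsloth's `FastLanguageModel` API. The model id, sequence length, and LoRA settings below are illustrative assumptions; the notebooks contain the exact configuration.

```python
from unsloth import FastLanguageModel

# Assumed model id and settings for illustration only; see the
# linked notebooks for the configuration actually used.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",  # assumed model id
    max_seq_length=1024,
    load_in_4bit=True,  # 4-bit quantization keeps VRAM usage low
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```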
Post 3639
We created a tool-calling guide for local LLMs! Learn how to use any open model like Qwen3-Coder-Next and GLM-4.7-Flash for function calling. Guide: https://unsloth.ai/docs/basics/tool-calling-guide-for-local-llms
We provide hands-on examples for: story writing, Python execution, terminal tool calls, maths and more.
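As a rough illustration of the pattern the guide covers, here is a minimal sketch of OpenAI-style function calling against a locally served open model. The endpoint, model name, and `run_python` tool are assumptions for illustration, not details taken from the guide.

```python
from openai import OpenAI

# Hypothetical local endpoint (e.g. llama.cpp, vLLM, or Ollama serving an
# OpenAI-compatible API); adjust base_url and model to your setup.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Describe a tool the model is allowed to call (assumed example tool).
tools = [{
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Execute a short Python snippet and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

response = client.chat.completions.create(
    model="Qwen3-Coder-Next",  # whatever name your local server exposes
    messages=[{"role": "user", "content": "Compute 12.8 * 35 using Python."}],
    tools=tools,
)

# If the model decided to call the tool, its arguments arrive as JSON strings.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

From here, a caller would execute the requested tool, append the result as a `tool` message, and ask the model to continue; the guide walks through that loop for its story-writing, terminal, and maths examples.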