DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning Text Generation • 8B • Updated Jan 6 • 4.96k • 94
DavidAU/Llama3.3-8B-Instruct-Thinking-Heretic-Uncensored-Claude-4.5-Opus-High-Reasoning Text Generation • 8B • Updated Jan 6 • 1.28k • 25
midorin-Linux/gpt-oss-20b-Coding-Distill-GGUF Text Generation • 21B • Updated 26 days ago • 1.52k • 6
LLM: MoEs Collection GGUFs, conventional and k-quants – both without imatrix. This should be faster for CPU inference. Right now DeepSee MoEs (Mixture of Experts) • 21 items • Updated 5 days ago