Hugging Face
Mukul (mtcl)
5 followers · 22 following
AI & ML interests
None yet
Organizations
None yet
mtcl's activity
New activity in Intel/DeepSeek-V4-Flash-W4A16-AutoRound (about 19 hours ago):
"Can I deploy it with sglang at my 8*4090 ubuntu sever?" · 3 replies · #1 opened 1 day ago by marshal007
New activity in nvidia/MiniMax-M2.7-NVFP4 (about 24 hours ago):
"Context Length for 2X6000 Pros (2x96 = 192GB VRAM)" · 2 replies · #2 opened 4 days ago by mtcl
New activity in unsloth/DeepSeek-V4-Flash (4 days ago):
"Worse than (smaller) MiniMax M2.7??" · 13 replies · #2 opened 4 days ago by deleted
New activity in ubergarm/Kimi-K2.6-GGUF (4 days ago):
"really awesome speeds! running at 256k context." · 1 reaction · 5 replies · #11 opened 5 days ago by mtcl
New activity in Qwen/Qwen3.6-27B (4 days ago):
"MOE 122b and 397b please!" · 16 reactions · 8 replies · #7 opened 6 days ago by jesleocizi
New activity in ubergarm/Kimi-K2.6-GGUF (5 days ago):
"How to disable thinking?" · 4 replies · #9 opened 5 days ago by Hansi2024
New activity in demon-zombie/MiniMax-M2.7-AWQ-4bit (7 days ago):
"These are NOT actual AWQ-quantized models." · 2 replies · #1 opened 14 days ago by cai-cai
New activity in NinjaBoffin/MiniMax-M2.7-NVFP4 (7 days ago):
"max context" · #2 opened 7 days ago by mtcl
New activity in ubergarm/Kimi-K2.6-GGUF (7 days ago):
"No think tags." · 10 replies · #4 opened 7 days ago by DrRos
New activity in nvidia/MiniMax-M2.5-NVFP4 (9 days ago):
"Minimax M2.7 NVFP4" · 5 reactions · 4 replies · #4 opened 15 days ago by mtcl
New activity in lukealonso/MiniMax-M2.7-NVFP4 (9 days ago):
"Unable to use full 192k context in SGLang with MiniMax-M2.7-NVFP4 (runtime capped at ~80,964 tokens)" · 3 replies · #9 opened 9 days ago by mtcl
New activity in lukealonso/MiniMax-M2.7-NVFP4 (12 days ago):
"w1 not matching w3 weight scales" · 12 replies · #1 opened 16 days ago by dareposte
New activity in lukealonso/MiniMax-M2.7-NVFP4 (15 days ago):
"tokenizer component mismatch and w1_weight_scale_2 must match w3_weight_scale_2. Accuracy may be affected issue" · 1 reply · #5 opened 15 days ago by mtcl
New activity in MiniMaxAI/MiniMax-M2.7 (17 days ago):
"Minimax 2.7 !!!!" · 5 reactions · 3 replies · #3 opened 17 days ago by mtcl
New activity in MiniMaxAI/MiniMax-M2.5 (18 days ago):
"Where is M2.7 ??" · 6 replies · #58 opened 19 days ago by mtcl
New activity in ubergarm/GLM-5.1-GGUF (21 days ago):
"Fantastic as usual" · ❤️ 5 reactions · 5 replies · #1 opened 21 days ago by ndroidph
New activity in google/gemma-4-31B-it (26 days ago):
"vllm / sglang support?" · 11 reactions · 7 replies · #4 opened 26 days ago by mtcl
Liked a model (29 days ago):
nvidia/MiniMax-M2.5-NVFP4 · Text Generation · 116B params · Updated 11 days ago · 70.5k downloads · 33 likes
New activity in nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 (about 1 month ago):
"Searching for a new Tool Parser" · 3 replies · #15 opened about 1 month ago by LucasMM14
"VLLM + MTP + NVFP4 doesn't work" · 1 reaction · 2 replies · #16 opened about 1 month ago by catplusplus