ZycckZ
1 follower · 14 following
AI & ML interests
Question Answering and LLMs
Recent Activity
reacted to Imosu's post with 🔥 · 1 day ago
# ZeroGPU Hardware Mismatch: Why Am I Getting RTX PRO 6000 Blackwell MIG Instead of the Documented H200?

I recently ran into a surprising issue while debugging a Hugging Face ZeroGPU Space. According to the Hugging Face ZeroGPU documentation, ZeroGPU uses NVIDIA H200-based resources, with configurations such as "large" and "xlarge" offering H200-class memory. However, when I printed the actual GPU information inside my Space, I got something different:

```txt
GPU: NVIDIA RTX PRO 6000 Blackwell Server Edition MIG 2g.48gb
Capability: (12, 0)
Torch: 2.8.0+cu128
CUDA: 12.8
```

This is not an H200. It appears to be a MIG slice of an RTX PRO 6000 Blackwell Server Edition GPU with 48 GB of VRAM.

This difference matters; it is not just a cosmetic hardware-name issue. In my case, the Space was running Qwen3-TTS and failed with:

```txt
CUDA error: no kernel image is available for execution on the device
```

The issue appears to be GPU architecture compatibility. The app was using kernels-community/flash-attn3, which is generally built for Hopper-class GPUs such as the H100/H200, but the device actually exposed to the Space was Blackwell, with compute capability 12.0. As a result, CUDA kernels that would work in the expected H200 environment failed on the GPU that was actually assigned.

To be clear, I am not saying the RTX PRO 6000 Blackwell is a bad GPU. It is a newer architecture and may well be powerful in many workloads. But it is not an H200, and its software ecosystem compatibility is different. For ML workloads, especially those relying on custom CUDA kernels, the exact GPU architecture matters a lot.

This raises a few questions:

- Is Hugging Face ZeroGPU now assigning RTX PRO 6000 Blackwell MIG instances instead of H200 instances?
- If so, why is this not clearly documented?
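For anyone hitting the same error, a defensive pattern is to inspect the device at startup and gate architecture-sensitive kernels on the reported compute capability rather than on what the docs promise. Below is a minimal sketch of that idea; it uses only standard `torch.cuda` calls, and the helper name `supports_hopper_kernels` plus the "Hopper = compute 9.x" threshold are my own illustrative assumptions, not an official API.

```python
import torch

def describe_gpu():
    """Print the same device info that revealed the mismatch above.

    Returns the (major, minor) compute capability, or None if no
    CUDA device is visible.
    """
    if not torch.cuda.is_available():
        print("No CUDA device visible")
        return None
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    cap = torch.cuda.get_device_capability(0)
    print(f"Capability: {cap}")
    print(f"Torch: {torch.__version__}")
    print(f"CUDA: {torch.version.cuda}")
    return cap

def supports_hopper_kernels(cap):
    """Hypothetical guard: flash-attn3 wheels are typically built for
    Hopper (compute 9.x); Blackwell (12.x) needs kernels built for
    that architecture, so fall back rather than crash."""
    return cap is not None and cap[0] == 9

cap = describe_gpu()
if not supports_hopper_kernels(cap):
    # e.g. switch the model's attention implementation to PyTorch SDPA
    print("Falling back to PyTorch SDPA instead of flash-attn3")
```

Checking `torch.cuda.get_device_capability` at runtime makes the Space degrade gracefully whichever hardware ZeroGPU actually hands out.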
liked a Space · 17 days ago · stabilityai/stable-fast-3d
liked a model · 19 days ago · deepseek-ai/DeepSeek-V4-Pro
Organizations
None yet
ZycckZ's datasets (1)
ZycckZ/HandGesture_LandmarkCoordinates_Skeleton · Viewer · Updated Sep 16, 2025 · 18.6k · 9 · 1