Imosu (mosu)
replied to their post 2 days ago:
# ZeroGPU Hardware Mismatch: Why Am I Getting an RTX PRO 6000 Blackwell MIG Instead of the Documented H200?

I recently ran into a surprising issue while debugging a Hugging Face ZeroGPU Space. The Hugging Face ZeroGPU documentation describes the service as using NVIDIA H200-based resources, with configurations such as "large" and "xlarge" offering H200-class memory. However, when I printed the actual GPU information inside my Space, I got something different:

```txt
GPU: NVIDIA RTX PRO 6000 Blackwell Server Edition MIG 2g.48gb
Capability: (12, 0)
Torch: 2.8.0+cu128
CUDA: 12.8
```

This is not an H200. It appears to be a MIG slice of an RTX PRO 6000 Blackwell Server Edition GPU, with 48 GB of VRAM.

This difference matters; it is not just a cosmetic hardware-name issue. In my case, the Space was running Qwen3-TTS and failed with:

```txt
CUDA error: no kernel image is available for execution on the device
```

The issue appears to be GPU architecture compatibility. The app was using kernels-community/flash-attn3, which is generally built for Hopper-class GPUs such as the H100/H200, but the device actually exposed to the Space was Blackwell, with compute capability 12.0. As a result, CUDA kernels that might work in the expected H200 environment failed on the GPU actually assigned.

To be clear, I am not saying the RTX PRO 6000 Blackwell is a bad GPU. It is a newer architecture and may be powerful for many workloads. But it is not the same as an H200, and its software-ecosystem compatibility is different. For ML workloads, especially those relying on custom CUDA kernels, the exact GPU architecture matters a great deal.

This raises a few questions:

- Is Hugging Face ZeroGPU now assigning RTX PRO 6000 Blackwell MIG instances instead of H200 instances?
- If so, why is this not clearly documented?
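For anyone hitting the same error, here is a small sketch of the kind of check I mean: print what the scheduler actually assigned, and gate flash-attention on the device's compute capability before loading custom kernels. The function names and the Hopper-only guard (`major == 9`) are my own illustrative choices; only the `torch` calls are real APIs.

```python
# Sketch: inspect the GPU a Space actually receives and decide whether
# Hopper-targeted flash-attn kernels are safe to load. Assumes torch is
# installed; function names and the capability guard are illustrative.
import torch


def describe_gpu() -> str:
    """Return a summary of the GPU visible to this process."""
    if not torch.cuda.is_available():
        return "CUDA not available"
    name = torch.cuda.get_device_name(0)
    capability = torch.cuda.get_device_capability(0)
    return (
        f"GPU: {name}\n"
        f"Capability: {capability}\n"
        f"Torch: {torch.__version__}\n"
        f"CUDA: {torch.version.cuda}"
    )


def flash_attn_supported() -> bool:
    """Heuristic guard: flash-attn 3 kernels target Hopper (sm_90)."""
    if not torch.cuda.is_available():
        return False
    major, _ = torch.cuda.get_device_capability(0)
    # A Blackwell MIG slice reports major == 12, so this returns False
    # there; the caller can fall back to PyTorch's built-in SDPA instead.
    return major == 9


print(describe_gpu())
```

Running this at Space startup makes the mismatch visible immediately instead of surfacing later as a "no kernel image" error deep inside the model's forward pass.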