FastSam-X: Optimized for Qualcomm Devices

The Fast Segment Anything Model (FastSAM) is a real-time, CNN-based solution to the Segment Anything task: segmenting any object in an image in response to various user interaction prompts. The model performs competitively despite significantly reduced computation, making it a practical choice for a variety of vision tasks.

This is based on the implementation of FastSam-X found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.

Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run it on a hosted Qualcomm® device.
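Once you have signed up, you can confirm access from Python before submitting any jobs. The snippet below is a minimal sketch, assuming the qai-hub client is installed and an API token from your account has already been configured; it simply lists the hosted devices available for compile and profile jobs.

```python
# Minimal sketch: assumes `pip install qai-hub` and that your Qualcomm AI Hub
# API token has already been configured for the client.
import qai_hub as hub

# List the hosted Qualcomm devices available for compile/profile jobs.
for device in hub.get_devices():
    print(device.name)
```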

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| QNN_DLC | float | Universal | QAIRT 2.42 | Download |
| TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | Download |

For more device-specific assets and performance metrics, visit FastSam-X on Qualcomm® AI Hub.
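As an illustration of how a pre-exported asset can be used, the sketch below runs the ONNX variant with ONNX Runtime on a random 640x640 input. The file name and NCHW float32 input layout are assumptions; check the downloaded asset for the actual tensor names, shapes, and pre/post-processing requirements.

```python
# Sketch only: "fastsam_x.onnx" is a placeholder name for the downloaded asset,
# and the NCHW float32 input layout is an assumption to verify against the model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("fastsam_x.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
image = np.random.rand(1, 3, 640, 640).astype(np.float32)  # placeholder 640x640 input

outputs = session.run(None, {input_name: image})
print([o.shape for o in outputs])  # raw segmentation outputs to post-process
```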

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See the FastSam-X page in our GitHub repository for usage instructions.
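For orientation, the sketch below outlines one way the custom-export flow can look with the Qualcomm AI Hub Python client: load the model wrapper, trace it with your input shape, and submit a compile job for a target device. The module path, the Model.from_pretrained helper, and the device name are assumptions based on common Qualcomm AI Hub Models conventions; follow the GitHub instructions above for the supported commands.

```python
# Hedged sketch of a custom export; names below are assumptions, not the
# documented workflow. See the qai-hub-models GitHub repository for specifics.
import torch
import qai_hub as hub
from qai_hub_models.models.fastsam_x import Model  # assumed module path

torch_model = Model.from_pretrained()            # or load a fine-tuned checkpoint
example_input = torch.rand(1, 3, 640, 640)       # customize the input shape here
traced_model = torch.jit.trace(torch_model, example_input)

# Compile for a hosted target device and runtime of your choice.
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=hub.Device("Samsung Galaxy S24"),     # example target device
    input_specs={"image": (1, 3, 640, 640)},
)
target_model = compile_job.get_target_model()
```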

Model Details

Model Type: Semantic segmentation

Model Stats:

  • Model checkpoint: fastsam-x.pt
  • Inference latency: RealTime
  • Input resolution: 640x640
  • Number of parameters: 72.2M
  • Model size (float): 276 MB

Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| FastSam-X | ONNX | float | Snapdragon® X Elite | 46.486 | 139 - 139 | NPU |
| FastSam-X | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 36.497 | 4 - 267 | NPU |
| FastSam-X | ONNX | float | Qualcomm® QCS8550 (Proxy) | 46.077 | 11 - 14 | NPU |
| FastSam-X | ONNX | float | Qualcomm® QCS9075 | 73.748 | 11 - 19 | NPU |
| FastSam-X | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 27.37 | 12 - 187 | NPU |
| FastSam-X | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 18.362 | 1 - 183 | NPU |
| FastSam-X | QNN_DLC | float | Snapdragon® X Elite | 43.841 | 5 - 5 | NPU |
| FastSam-X | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 32.661 | 3 - 314 | NPU |
| FastSam-X | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 279.828 | 2 - 223 | NPU |
| FastSam-X | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 43.11 | 5 - 7 | NPU |
| FastSam-X | QNN_DLC | float | Qualcomm® SA8775P | 68.478 | 0 - 222 | NPU |
| FastSam-X | QNN_DLC | float | Qualcomm® QCS9075 | 70.434 | 7 - 17 | NPU |
| FastSam-X | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 93.211 | 2 - 392 | NPU |
| FastSam-X | QNN_DLC | float | Qualcomm® SA7255P | 279.828 | 2 - 223 | NPU |
| FastSam-X | QNN_DLC | float | Qualcomm® SA8295P | 77.966 | 0 - 296 | NPU |
| FastSam-X | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 25.29 | 0 - 222 | NPU |
| FastSam-X | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 17.889 | 5 - 242 | NPU |
| FastSam-X | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 32.534 | 3 - 443 | NPU |
| FastSam-X | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 279.179 | 4 - 269 | NPU |
| FastSam-X | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 42.096 | 4 - 43 | NPU |
| FastSam-X | TFLITE | float | Qualcomm® SA8775P | 68.042 | 4 - 269 | NPU |
| FastSam-X | TFLITE | float | Qualcomm® QCS9075 | 70.216 | 4 - 158 | NPU |
| FastSam-X | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 92.525 | 5 - 525 | NPU |
| FastSam-X | TFLITE | float | Qualcomm® SA7255P | 279.179 | 4 - 269 | NPU |
| FastSam-X | TFLITE | float | Qualcomm® SA8295P | 77.396 | 4 - 343 | NPU |
| FastSam-X | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 25.087 | 4 - 271 | NPU |
| FastSam-X | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 17.21 | 4 - 276 | NPU |

License

  • The license for the original implementation of FastSam-X can be found here.
