Additional Jetson Orin Nano Super benchmarks will be added continuously.
Jonna Matthiesen
Recent Activity
replied to their post about 7 hours ago
Qwen3.5 on-device benchmarks on the Nvidia Jetson lineup are now live.
We've added the latest Qwen3.5 models (0.8B–9B) to our on-device inference benchmarks (Nvidia Jetson Orin Nano Super, AGX Orin, AGX Thor).
Explore TPS, TTFT, E2E latency, and TPOT, all measured on real hardware: https://huggingface.co/spaces/embedl/Edge-Inference-Benchmarks
Stay tuned for additional benchmarks and Embedl-optimized models, enabling models to run faster and on less expensive hardware.
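For readers new to these metrics, the four numbers are related to each other; a minimal sketch of how they can be derived from per-token arrival timestamps (illustrative code only, not Embedl's actual measurement harness):

```python
# Sketch: deriving TTFT, E2E latency, TPS, and TPOT from token timestamps.
# Function name and inputs are hypothetical, for illustration only.

def latency_metrics(request_start: float, token_times: list[float]) -> dict:
    """Compute the four metrics from token arrival times (seconds)."""
    n = len(token_times)
    ttft = token_times[0] - request_start      # Time To First Token
    e2e = token_times[-1] - request_start      # End-to-End latency
    tps = n / e2e                              # Tokens Per Second overall
    # TPOT: average gap between successive tokens after the first
    tpot = (e2e - ttft) / (n - 1) if n > 1 else 0.0
    return {"ttft_s": ttft, "e2e_s": e2e, "tps": tps, "tpot_s": tpot}

# Example: 5 tokens, first arriving after 0.2 s, then one every 0.1 s
print(latency_metrics(0.0, [0.2, 0.3, 0.4, 0.5, 0.6]))
```

Note that TPOT captures steady-state decode speed while TTFT is dominated by prompt processing, which is why both are reported separately.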
If you're working on edge LLM deployment, we'd love to discuss your use case.
posted an update about 7 hours ago
updated a Space about 12 hours ago
embedl/Edge-Inference-Benchmarks