Kernels

This is the repository card of kernels-community/flash-attn3, which has been pushed to the Hub. It was built to be used with the kernels library. This card was automatically generated.

How to use

# make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/flash-attn3") # <- change the ID if needed
flash_attn_combine = kernel_module.flash_attn_combine

flash_attn_combine(...)  # call with the arguments the function expects
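
For a concrete call, the sketch below runs flash_attn_func end to end. It is illustrative rather than part of the generated card: it assumes a CUDA GPU and the upstream flash-attention calling convention (q, k, v as (batch, seqlen, nheads, headdim) tensors in fp16 or bf16), and since the return value can differ between versions it is unpacked defensively.

import torch
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/flash-attn3")

# Random half-precision tensors on the GPU, following the
# (batch, seqlen, nheads, headdim) layout used by flash-attention.
batch, seqlen, nheads, headdim = 2, 1024, 8, 128
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.bfloat16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# causal=True applies the lower-triangular mask used in decoder self-attention.
# Some versions return only the output tensor, others (output, log-sum-exp),
# so handle both cases.
result = kernel_module.flash_attn_func(q, k, v, causal=True)
out = result[0] if isinstance(result, tuple) else result
print(out.shape)  # torch.Size([2, 1024, 8, 128])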

Available functions

  • flash_attn_combine
  • flash_attn_func
  • flash_attn_qkvpacked_func
  • flash_attn_varlen_func (see the sketch after this list)
  • flash_attn_with_kvcache
  • get_scheduler_metadata
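
Of the functions above, flash_attn_varlen_func handles ragged batches without padding. A minimal sketch, assuming the upstream flash-attention varlen interface: all sequences are packed back to back into one (total_tokens, nheads, headdim) tensor, and cu_seqlens_q/cu_seqlens_k hold the cumulative sequence boundaries as int32 tensors.

import torch
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/flash-attn3")

# Two sequences of lengths 3 and 5, packed back to back (no padding).
seqlens = [3, 5]
total, nheads, headdim = sum(seqlens), 8, 128
q = torch.randn(total, nheads, headdim, dtype=torch.bfloat16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Cumulative sequence lengths [0, 3, 8]: sequence i spans
# tokens cu_seqlens[i] .. cu_seqlens[i+1] in the packed tensors.
cu_seqlens = torch.tensor([0, 3, 8], dtype=torch.int32, device="cuda")
max_seqlen = max(seqlens)

result = kernel_module.flash_attn_varlen_func(
    q, k, v,
    cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
    max_seqlen_q=max_seqlen, max_seqlen_k=max_seqlen,
    causal=True,
)
out = result[0] if isinstance(result, tuple) else result
print(out.shape)  # torch.Size([8, 8, 128])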

Supported backends

  • cuda

CUDA Capabilities

  • 8.0
  • 9.0a
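
The published binaries target compute capability 8.0 (Ampere) and 9.0a (the Hopper-specific variant of 9.0), so a quick check of the local GPU before loading can save a confusing failure. A small sanity-check sketch, not part of the generated card:

import torch

# Compare the local GPU against the capabilities listed above.
# torch reports 9.0a devices as (9, 0).
major, minor = torch.cuda.get_device_capability()
if (major, minor) not in [(8, 0), (9, 0)]:
    raise RuntimeError(
        f"GPU reports compute capability {major}.{minor}; "
        "this build targets 8.0 (Ampere) and 9.0a (Hopper)."
    )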

Benchmarks

A benchmarking script is available for this kernel. Run kernels benchmark kernels-community/flash-attn3 to execute it.

[TODO: provide benchmarks if available]

Source code

This kernel packages FlashAttention-3. The original source code lives in the hopper directory of https://github.com/Dao-AILab/flash-attention. Reference: Shah et al., "FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision" (2024), https://arxiv.org/abs/2407.08608.

Notes

[TODO: provide additional notes about this kernel if needed]
