This is the repository card of `kernels-community/flash-attn3`, pushed to the Hugging Face Hub. It was built to be used with the kernels library. This card was automatically generated.
How to use
```python
# Make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/flash-attn3")  # <- change the ID if needed
flash_attn_combine = kernel_module.flash_attn_combine
flash_attn_combine(...)
```
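For a concrete call, here is a minimal sketch using `flash_attn_func`. The tensor layout and the `causal` keyword follow the upstream flash-attn convention; they are assumptions about this build's interface, not guarantees.

```python
import torch
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/flash-attn3")

# Assumed layout: (batch, seq_len, num_heads, head_dim), fp16 on a CUDA device
q = torch.randn(2, 1024, 8, 128, dtype=torch.float16, device="cuda")
k = torch.randn(2, 1024, 8, 128, dtype=torch.float16, device="cuda")
v = torch.randn(2, 1024, 8, 128, dtype=torch.float16, device="cuda")

result = kernel_module.flash_attn_func(q, k, v, causal=True)
# Depending on the interface version, the call may return `out` alone or a
# tuple `(out, softmax_lse)`; unpack defensively.
out = result[0] if isinstance(result, tuple) else result
print(out.shape)  # expected: torch.Size([2, 1024, 8, 128])
```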
Available functions
- flash_attn_combine
- flash_attn_func
- flash_attn_qkvpacked_func
- flash_attn_varlen_func
- flash_attn_with_kvcache
- get_scheduler_metadata
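Of these, `flash_attn_varlen_func` is the least self-explanatory: it operates on packed, padding-free batches described by cumulative sequence lengths. The sketch below assumes the upstream flash-attn calling convention (`cu_seqlens_*` and `max_seqlen_*` keywords); verify against the actual module before relying on it.

```python
import torch
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/flash-attn3")

# Two packed sequences of lengths 3 and 5 (8 tokens total), no padding.
# cu_seqlens holds cumulative offsets into the packed token dimension.
cu_seqlens = torch.tensor([0, 3, 8], dtype=torch.int32, device="cuda")
q = torch.randn(8, 8, 128, dtype=torch.float16, device="cuda")  # (total_tokens, heads, head_dim)
k = torch.randn(8, 8, 128, dtype=torch.float16, device="cuda")
v = torch.randn(8, 8, 128, dtype=torch.float16, device="cuda")

result = kernel_module.flash_attn_varlen_func(
    q, k, v,
    cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
    max_seqlen_q=5, max_seqlen_k=5,
    causal=True,
)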
Supported backends
- cuda
CUDA Capabilities
- 8.0
- 9.0a
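Since only compute capabilities 8.0 (e.g. A100) and 9.0a (e.g. H100) are built, a quick guard before loading the kernel avoids a confusing failure later. `torch.cuda.get_device_capability` is standard PyTorch; the accepted set below simply mirrors the list above.

```python
import torch

# The card lists builds for compute capabilities 8.0 and 9.0a only.
major, minor = torch.cuda.get_device_capability()
if (major, minor) not in {(8, 0), (9, 0)}:
    raise RuntimeError(f"flash-attn3 kernel has no build for sm_{major}{minor}")
```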
Benchmarks
A benchmarking script is available for this kernel. Run `kernels benchmark kernels-community/flash-attn3` to execute it.
[TODO: provide benchmarks if available]
Source code
This kernel packages FlashAttention-3. The original source code is available at https://github.com/Dao-AILab/flash-attention; see also the paper "FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision" (Shah et al., 2024).
Notes
[TODO: provide additional notes about this kernel if needed]