PEFT documentation
BEFT: Bias-Efficient Fine-Tuning of Language Models in Low-Data Regimes
BEFT is a parameter-efficient fine-tuning (PEFT) method that fine-tunes only the added bias terms of the value projections in a pretrained transformer model. The authors show that fine-tuning these value-projection bias terms generally yields higher downstream performance in low-data regimes than fine-tuning the added bias terms of the query or key projections.
BEFT currently has the following tradeoffs:
Pros:
- BEFT requires far fewer parameters than LoRA, while maintaining competitive or superior performance across tasks in low-data regimes.
Cons:
- In high-data regimes, BEFT may show limited effectiveness compared to LoRA and full-parameter fine-tuning.
If your use case belongs to the high-data regime, consider other PEFT methods such as LoRA.
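To make the parameter-count difference concrete, here is a back-of-the-envelope comparison for a BERT-base-sized model. The figures below are illustrative (12 layers, hidden size 768, LoRA rank 8 are assumptions, not values from the paper); the general point is that a bias vector costs `d` parameters per projection while a rank-`r` LoRA costs `2*d*r`:

```python
# Illustrative trainable-parameter counts for a BERT-base-sized model
# (12 layers, hidden size 768), comparing bias-only tuning of the value
# projection (BEFT-style) with a rank-8 LoRA on the same projection.
# These sizes are assumptions for the sake of the example.
layers, d, r = 12, 768, 8

# BEFT: one bias vector of size d per value projection.
beft_params = layers * d

# LoRA: two low-rank matrices (d x r and r x d) per value projection.
lora_params = layers * (d * r + r * d)

print(beft_params)   # 9216
print(lora_params)   # 147456
print(lora_params // beft_params)  # LoRA trains 2*r = 16x more parameters here
```

The gap grows linearly with the LoRA rank, which is why bias-only tuning can be attractive when labeled data is scarce.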
The abstract from the paper is:
Fine-tuning the bias terms of large language models (LLMs) has the potential to achieve unprecedented parameter efficiency while maintaining competitive performance, particularly in low-data regimes. However, the link between fine-tuning different bias terms (i.e., bq, bk, and bv in the query, key, or value projections) and downstream performance remains largely unclear to date. In this paper, we investigate the link between fine-tuning bq, bk, and bv with the performance of the downstream task. Our key finding is that directly fine-tuning bv generally leads to higher downstream performance in low-data regimes, in comparison to bq and bk. We extensively evaluate this unique property across a wide range of LLMs spanning encoder-only and decoder-only architectures up to 6.7B parameters (including bias-free LLMs). Our results provide strong evidence for the effectiveness of directly fine-tuning bv across various downstream tasks.
BeftConfig
class peft.BeftConfig
< source >( task_type: Optional[Union[str, TaskType]] = None peft_type: Optional[Union[str, PeftType]] = None auto_mapping: Optional[dict] = None peft_version: Optional[str] = None base_model_name_or_path: Optional[str] = None revision: Optional[str] = None inference_mode: bool = False target_modules: Optional[Union[list[str], str]] = None modules_to_save: Optional[list[str]] = None init_weights: bool = True )
Parameters
- **target_modules** (`Optional[Union[List[str], str]]`) — The names of the modules to apply the adapter to. If this is specified, only the modules with the specified names will be replaced. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. If this is not specified, modules will be chosen according to the model architecture. If the architecture is not known, an error will be raised — in this case, you should specify the target modules manually.
- **modules_to_save** (`Optional[List[str]]`) — List of modules apart from BEFT layers to be set as trainable and saved in the final checkpoint.
- **init_weights** (`bool`) — Whether to initialize the vectors in the BEFT layers, defaults to `True`. Setting this to `False` is discouraged.
This is the configuration class to store the configuration of a BeftModel.
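Assuming `BeftConfig` follows the standard PEFT workflow, a configuration could be wrapped around a base model with `get_peft_model`. This is a sketch, not the library's documented usage: the module name `"v_proj"` is a hypothetical value-projection name for a decoder-style model, and the base model checkpoint is an arbitrary placeholder.

```python
# Sketch only: assumes BeftConfig plugs into the usual PEFT workflow.
# "v_proj" and the checkpoint name are assumptions for illustration.
from transformers import AutoModelForCausalLM
from peft import BeftConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = BeftConfig(task_type="CAUSAL_LM", target_modules=["v_proj"])
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()  # only value-projection biases train
```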
BeftModel
class peft.BeftModel
< source >( model peft_config: Union[PeftConfig, dict[str, PeftConfig]] adapter_name: str low_cpu_mem_usage: bool = False state_dict: Optional[dict[str, torch.Tensor]] = None ) → torch.nn.Module
Parameters
- model (PreTrainedModel) — The model to be adapted.
- config (BeftConfig) — The configuration of the (BEFT) model.
- **adapter_name** (`str`) — The name of the adapter, defaults to `"default"`.
- **low_cpu_mem_usage** (`bool`, *optional*, defaults to `False`) — Create empty adapter weights on meta device. Useful to speed up the loading process.
Returns
torch.nn.Module
The (BEFT) model.
Creates a BEFT model by fine-tuning only the added bias terms of the value projections of a pretrained transformer model in low-data regimes. The method is described in detail in https://arxiv.org/abs/2509.15974.
Example:
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import BeftModel, BeftConfig
>>> config = BeftConfig(
... peft_type="Beft",
... task_type="SEQ_2_SEQ_LM",
... target_modules=["v"],
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> beft_model = BeftModel(model, config, adapter_name="default")

Attributes:
- model (PreTrainedModel) — The model to be adapted.
- peft_config (BeftConfig): The configuration of the (BEFT) model.