repo
string
github_id
int64
github_node_id
string
number
int64
html_url
string
api_url
string
title
string
body
string
state
string
state_reason
string
locked
bool
comments_count
int64
labels
list
assignees
list
created_at
string
updated_at
string
closed_at
string
author_association
string
milestone_title
string
snapshot_id
string
extracted_at
string
author_login
string
author_id
int64
author_node_id
string
author_type
string
author_site_admin
bool
huggingface/transformers
836,684,366
MDU6SXNzdWU4MzY2ODQzNjY=
10,816
https://github.com/huggingface/transformers/issues/10816
https://api.github.com/repos/huggingface/transformers/issues/10816
[trainer] figuring out why eval with `--fp16_full_eval` is 25% slower
Recently the HF trainer was extended to support full fp16 eval via `--fp16_full_eval`. I'd have expected it to be either equal to or faster than eval with the fp32 model, but surprisingly I have noticed a 25% slowdown when using it.

This may or may not impact deepspeed as well, which also runs eval in fp16, but we can't compare it to a baseline, since it only runs fp16.

I wonder if someone would like to research where the slowdown comes from. I'd probably isolate the `model.half()` call, which should be a constant, and focus on the rest of the eval. I'm thinking that some component doesn't take well to fp16 variables. e.g. label smoothing was problematic and now should be fixed in https://github.com/huggingface/transformers/pull/10815, but I tested w/ and w/o label smoothing and it's not adding to the slowdown.

Here are the script and the corresponding metrics. First w/o `--fp16_full_eval`:

```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \
./examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \
--overwrite_output_dir --max_train_samples 10 --max_val_samples 100 --max_source_length 12 \
--max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 \
--per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \
--logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \
--dataset_config ro-en --source_lang en --target_lang ro \
--source_prefix "translate English to Romanian: " --do_eval

***** train metrics *****
  epoch                      = 1.0
  init_mem_cpu_alloc_delta   = 2MB
  init_mem_cpu_peaked_delta  = 0MB
  init_mem_gpu_alloc_delta   = 230MB
  init_mem_gpu_peaked_delta  = 0MB
  train_mem_cpu_alloc_delta  = 60MB
  train_mem_cpu_peaked_delta = 63MB
  train_mem_gpu_alloc_delta  = 231MB
  train_mem_gpu_peaked_delta = 194MB
  train_runtime              = 7.7162
  train_samples              = 10
  train_samples_per_second   = 0.648

***** eval metrics *****
  epoch                     = 1.0
  eval_bleu                 = 2.4612
  eval_gen_len              = 18.53
  eval_loss                 = 5.017
  eval_mem_cpu_alloc_delta  = 0MB
  eval_mem_cpu_peaked_delta = 0MB
  eval_mem_gpu_alloc_delta  = 0MB
  eval_mem_gpu_peaked_delta = 244MB
  eval_runtime              = 4.6481
  eval_samples              = 100
  eval_samples_per_second   = 21.514
```

now let's add `--fp16_full_eval`:

```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 \
./examples/seq2seq/run_translation.py --model_name_or_path t5-small --output_dir /tmp/zero3 \
--overwrite_output_dir --max_train_samples 10 --max_val_samples 100 --max_source_length 12 \
--max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 \
--per_device_train_batch_size 2 --learning_rate 3e-3 --warmup_steps 8 --predict_with_generate \
--logging_steps 0 --save_steps 2 --eval_steps 1 --group_by_length --adafactor --dataset_name wmt16 \
--dataset_config ro-en --source_lang en --target_lang ro \
--source_prefix "translate English to Romanian: " --do_eval \
--fp16_full_eval

***** train metrics *****
  epoch                      = 1.0
  init_mem_cpu_alloc_delta   = 2MB
  init_mem_cpu_peaked_delta  = 0MB
  init_mem_gpu_alloc_delta   = 230MB
  init_mem_gpu_peaked_delta  = 0MB
  train_mem_cpu_alloc_delta  = 60MB
  train_mem_cpu_peaked_delta = 63MB
  train_mem_gpu_alloc_delta  = 231MB
  train_mem_gpu_peaked_delta = 194MB
  train_runtime              = 7.1477
  train_samples              = 10
  train_samples_per_second   = 0.7

***** eval metrics *****
  epoch                     = 1.0
  eval_bleu                 = 2.4612
  eval_gen_len              = 18.53
  eval_loss                 = 5.0168
  eval_mem_cpu_alloc_delta  = 0MB
  eval_mem_cpu_peaked_delta = 0MB
  eval_mem_gpu_alloc_delta  = -231MB
  eval_mem_gpu_peaked_delta = 262MB
  eval_runtime              = 6.0125
  eval_samples              = 100
  eval_samples_per_second   = 16.632
```

As you can see, w/o `--fp16_full_eval` we get ~22 samples per sec and w/ it only ~17 - that's a huge difference. I also tested with a larger sample and the gap remains constant.

The halving happens here: https://github.com/huggingface/transformers/blob/21e86f99e6b91af2e4df3790ba6c781e85fa0eb5/src/transformers/trainer.py#L1800

Thank you!
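One generic way to chase a slowdown like this, as the issue suggests, is to bisect the eval loop by timing its phases separately and comparing the per-phase totals between the fp32 and fp16 runs. Below is a minimal, library-free sketch of such a phase timer; `PhaseTimer`, the phase names, and the `time.sleep` stand-ins for the forward/generate calls are all hypothetical, not part of the trainer.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class PhaseTimer:
    """Accumulate wall-clock time per labeled phase of a loop."""

    def __init__(self):
        self.totals = defaultdict(float)

    @contextmanager
    def phase(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            self.totals[name] += time.perf_counter() - start

    def report(self):
        # Phases sorted by total cost, most expensive first.
        return sorted(self.totals.items(), key=lambda kv: -kv[1])

timer = PhaseTimer()
for _ in range(3):              # stand-in for iterating eval batches
    with timer.phase("forward"):
        time.sleep(0.001)       # stand-in for model(**batch)
    with timer.phase("generate"):
        time.sleep(0.002)       # stand-in for model.generate(...)

for name, seconds in timer.report():
    print(name, round(seconds, 4))
```

Running the real eval loop under two such timers (one per precision) and diffing the reports would show whether the extra 25% is concentrated in the forward pass, in generation, or elsewhere.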
closed
completed
false
11
[ "Good First Issue", "Good Second Issue" ]
[]
2021-03-20T04:30:07Z
2026-03-02T13:59:47Z
2026-03-02T13:59:47Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
stas00
10,676,103
MDQ6VXNlcjEwNjc2MTAz
User
false
huggingface/transformers
860,870,722
MDU6SXNzdWU4NjA4NzA3MjI=
11,307
https://github.com/huggingface/transformers/issues/11307
https://api.github.com/repos/huggingface/transformers/issues/11307
Getting time offsets of beginning and end of each word in Wav2Vec2
# 🚀 Feature request

Hello, I was thinking it would be of great help if I could get the time offsets of the start and end of each word.

## Motivation

I was going through the Google Speech-to-Text documentation and found this [feature](https://cloud.google.com/speech-to-text/docs/async-time-offsets), and thought it would be really amazing if I could have something similar here.

## Your contribution

I could really use some help with this task and would love to implement something similar.
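The core of the requested feature can be sketched without the model: a CTC decoder like Wav2Vec2's emits one output frame per fixed slice of audio, so once each decoded character carries the index of the frame it was emitted at, word start/end times fall out by grouping characters into words and multiplying frame indices by the frame stride. The ~20 ms stride and the `word_offsets` helper below are assumptions for illustration, not the library's API.

```python
# Assumed frame stride: Wav2Vec2-style models emit roughly one frame
# per 20 ms of 16 kHz audio. The exact value depends on the model.
FRAME_STRIDE_S = 0.02

def word_offsets(chars_with_frames, frame_stride=FRAME_STRIDE_S):
    """chars_with_frames: list of (char, frame_index) pairs from a CTC
    decode, with ' ' acting as the word separator."""
    words, current = [], []
    for ch, frame in chars_with_frames:
        if ch == " ":
            if current:
                words.append(current)
                current = []
        else:
            current.append((ch, frame))
    if current:
        words.append(current)
    return [
        {
            "word": "".join(c for c, _ in w),
            "start": w[0][1] * frame_stride,
            "end": (w[-1][1] + 1) * frame_stride,
        }
        for w in words
    ]

decoded = [("h", 10), ("i", 12), (" ", 14), ("y", 20), ("o", 22), ("u", 25)]
print(word_offsets(decoded))
```

Hooking this up for real would mean keeping the argmax frame indices around during `Wav2Vec2CTCTokenizer` decoding instead of discarding them.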
closed
completed
false
27
[ "Good First Issue", "Good Second Issue" ]
[]
2021-04-19T03:57:57Z
2026-02-26T14:14:43Z
2026-02-26T14:14:43Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
theainerd
15,798,640
MDQ6VXNlcjE1Nzk4NjQw
User
false
huggingface/transformers
919,408,065
MDU6SXNzdWU5MTk0MDgwNjU=
12,126
https://github.com/huggingface/transformers/issues/12126
https://api.github.com/repos/huggingface/transformers/issues/12126
[Performance] Tracking open Issues and PRs (pytorch transformers)
Let's use this Issue to track performance issues and enhancement requests, so it's easier to prioritize the work. **This is for pytorch `transformers`.**

Also I will label it as a `Good Difficult Issue` in case someone is ready for a challenging but rewarding experience of figuring things out. If you do want to take the challenge, comment in the corresponding Issue/PR that resonates with you so others know you're working on it.

If I missed any other relevant open performance-related Issues/PRs that need attention, please comment below.

## Regression
- [ ] https://github.com/huggingface/transformers/pull/11218 Regression after Bart-like refactoring - need to compare with the original Bart refactoring PR since most likely the regression happened there.

## Odd slowness
- [ ] https://github.com/huggingface/transformers/issues/10816 figuring out why eval with `--fp16_full_eval` is 25% slower

## Fused kernels possibilities
- [ ] https://github.com/huggingface/transformers/issues/11368 Megatron fused CUDA kernels to improve Hugging Face model classes' scalability
- [ ] research pytorch kernels?
- [ ] I know Deepspeed has various kernels that we might be able to use

## Faster / leaner startup / module loading
- [ ] https://github.com/huggingface/transformers/issues/12274 - skip storage allocation which gets dropped for pretrained weights

## Faster optimizers
- [ ] https://github.com/huggingface/transformers/issues/12084 - a proposal to port `MemoryEfficientFP16Optimizer` from fairseq
- [ ] https://github.com/huggingface/transformers/issues/9965 - `torch.optim._multi_tensor` faster optimizers - having some bottleneck in the test script - need to profile

## Scalability
- [ ] https://github.com/huggingface/transformers/issues/10321 Tensor Parallelism

## Deepspeed-specific features
- [ ] https://github.com/huggingface/transformers/issues/9606 a list of features that can be integrated
- [ ] https://github.com/huggingface/transformers/issues/12273 - make `from_pretrained` loading faster

## Tests
- [ ] No issue yet, but we really need to add performance regression tests
closed
completed
false
3
[ "Good First Issue", "Performance", "Good Difficult Issue" ]
[ "stas00", "patil-suraj" ]
2021-06-12T03:45:57Z
2026-03-02T14:18:44Z
2026-03-02T14:18:44Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
stas00
10,676,103
MDQ6VXNlcjEwNjc2MTAz
User
false
huggingface/transformers
921,433,978
MDU6SXNzdWU5MjE0MzM5Nzg=
12,177
https://github.com/huggingface/transformers/issues/12177
https://api.github.com/repos/huggingface/transformers/issues/12177
Exception during hyperparameter search with Ray and transformers library starting from version 4.5.0
I currently face the problem that with recent versions of the transformers library (issue starting at version 4.5.0) the hyperparameter search with ray tune runs into a serialization issue described below.

## Environment info
- `transformers` version: 4.5.0
- Platform: Linux-4.19.0-16-amd64-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
- Ray version: 1.4.0

### Who can help
Maybe it is interesting to @richardliaw and @amogkam because they were mentioned as responsible for ray/raytune.

## Information
Model I am using (Bert, XLNet ...): distilbert-base-uncased (model doesn't matter)

The problem arises when using:
* [x] my own modified scripts: (give details below)

The task I am working on is:
* [x] an official GLUE/SQUaD task: (give the name): GLUE mrpc

## To reproduce
I have created a small working example which shows the error which (at least) I get.
The code is mainly based on the [blog entry covering ray tune](https://huggingface.co/blog/ray-tune):

```python
import os
os.environ['TOKENIZERS_PARALLELISM'] = 'false'

from datasets import load_dataset, load_metric
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from ray import tune
from ray.util import inspect_serializability

model_name = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = load_dataset('glue', 'mrpc')

def encode(examples):
    outputs = tokenizer(examples['sentence1'], examples['sentence2'], truncation=True)
    return outputs

encoded_dataset = dataset.map(encode, batched=True)

def model_init():
    return AutoModelForSequenceClassification.from_pretrained(model_name, return_dict=True)

def compute_metrics(eval_pred):
    metric = load_metric('glue', 'mrpc')
    predictions, labels = eval_pred
    predictions = predictions.argmax(axis=-1)
    return metric.compute(predictions=predictions, references=labels)

training_args = TrainingArguments("test")

trainer = Trainer(
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=encoded_dataset["train"],
    eval_dataset=encoded_dataset["validation"],
    model_init=model_init,
    compute_metrics=compute_metrics,
)

def search_params(trial):
    return {
        # toy example
        "learning_rate": tune.grid_search([0.000001, 0.00001, 0.0001, 0.001]),
    }

trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",
    hp_space=search_params,
    n_trials=1,
)
```

This code snippet works with transformers version 4.4.2 and earlier, but not on versions 4.5.0 and later.
The error which appeared is:

```python
Traceback (most recent call last):
  File "working_example.py", line 48, in <module>
    trainer.hyperparameter_search(
  File "/site-packages/transformers/trainer.py", line 1459, in hyperparameter_search
    best_run = run_hp_search(self, n_trials, direction, **kwargs)
  File "/site-packages/transformers/integrations.py", line 231, in run_hp_search_ray
    analysis = ray.tune.run(
  File "/site-packages/ray/tune/tune.py", line 297, in run
    _ray_auto_init()
  File "/site-packages/ray/tune/tune.py", line 664, in _ray_auto_init
    ray.init()
  File "/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
    return func(*args, **kwargs)
  File "/site-packages/ray/worker.py", line 866, in init
    hook()
  File "/site-packages/ray/tune/registry.py", line 171, in flush
    self.references[k] = ray.put(v)
  File "/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
    return func(*args, **kwargs)
  File "/site-packages/ray/worker.py", line 1527, in put
    object_ref = worker.put_object(value)
  File "/site-packages/ray/worker.py", line 280, in put_object
    serialized_value = self.get_serialization_context().serialize(value)
  File "/site-packages/ray/serialization.py", line 326, in serialize
    return self._serialize_to_msgpack(value)
  File "/site-packages/ray/serialization.py", line 306, in _serialize_to_msgpack
    self._serialize_to_pickle5(metadata, python_objects)
  File "/site-packages/ray/serialization.py", line 266, in _serialize_to_pickle5
    raise e
  File "/site-packages/ray/serialization.py", line 262, in _serialize_to_pickle5
    inband = pickle.dumps(
  File "/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps
    cp.dump(obj)
  File "/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
    return Pickler.dump(self, obj)
TypeError: cannot pickle '_thread.RLock' object
```

Based on this error, I searched for code to check which part is not serializable (because the whole trainer is transferred to each ray trial).
I found the [ray serialization page](https://docs.ray.io/en/master/serialization.html#troubleshooting) and executed:

```python
inspect_serializability(trainer, name="test")
```

The output was:

```
================================================================================
Checking Serializability of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>
================================================================================
!!! FAIL serialization: cannot pickle '_thread.RLock' object
Serializing 'compute_metrics' <function compute_metrics at 0x7fce1cb5b5e0>...
Serializing 'model_init' <function model_init at 0x7fce1cb5b550>...
Serializing '_gather_and_numpify' <bound method Trainer._gather_and_numpify of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>>...
!!! FAIL serialization: cannot pickle '_thread.RLock' object
Serializing '__func__' <function Trainer._gather_and_numpify at 0x7fce1f739940>...
WARNING: Did not find non-serializable object in <bound method Trainer._gather_and_numpify of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>>. This may be an oversight.
================================================================================
Variable: FailTuple(_gather_and_numpify [obj=<bound method Trainer._gather_and_numpify of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>>, parent=<transformers.trainer.Trainer object at 0x7fce1cbbeee0>]) was found to be non-serializable.
There may be multiple other undetected variables that were non-serializable.
Consider either removing the instantiation/imports of these variables or moving the instantiation into the scope of the function/class.
If you have any suggestions on how to improve this error message, please reach out to the Ray developers on github.com/ray-project/ray/issues/
================================================================================
```

I did not find any major changes between version 4.4.2 and 4.5.0 with regards to integrations.py and trainer.py.
I think the first step would be that someone else reproduces the behaviour if possible (maybe something is also wrong on my side/setup).
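The `inspect_serializability` output above narrows the failure to one attribute of the trainer. The same idea can be reproduced in a few lines of plain Python: try to pickle each attribute of an object separately and report the ones that fail. `unpicklable_attrs` and `FakeTrainer` below are hypothetical stand-ins for illustration, not transformers or ray APIs.

```python
import pickle
import threading

def unpicklable_attrs(obj):
    """Try to pickle each attribute of obj separately; return the names
    of those that fail. A poor man's version of ray's
    inspect_serializability, for narrowing down errors like
    "cannot pickle '_thread.RLock' object"."""
    bad = []
    for name, value in vars(obj).items():
        try:
            pickle.dumps(value)
        except Exception:
            bad.append(name)
    return bad

class FakeTrainer:  # stand-in for a transformers.Trainer instance
    def __init__(self):
        self.args = {"learning_rate": 1e-5}  # picklable
        self.lock = threading.RLock()        # not picklable

print(unpicklable_attrs(FakeTrainer()))  # -> ['lock']
```

Running something like this against the real `trainer` object would point at whichever attribute (here, an `RLock` hiding inside a bound method's object) gained the lock between 4.4.2 and 4.5.0.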
closed
completed
false
4
[]
[]
2021-06-15T14:02:20Z
2026-02-26T12:32:52Z
2021-06-15T18:53:20Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
sven-h
8,777,506
MDQ6VXNlcjg3Nzc1MDY=
User
false
huggingface/transformers
978,451,864
MDU6SXNzdWU5Nzg0NTE4NjQ=
13,244
https://github.com/huggingface/transformers/issues/13244
https://api.github.com/repos/huggingface/transformers/issues/13244
Tapas tokenization Different from Tensorflow Code
## Environment info
- `transformers` version: 4.9.1

### Who can help
@LysandreJik @sgugger @NielsRogge

## Information
Model I am using (Bert, XLNet ...): Tapas

When I am trying to replicate the TAPAS table retrieval results using the Huggingface Tapas implementation, I find that [Tapas tokenization in Huggingface](https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py#L1314) is different from the original [Tensorflow code](https://github.com/google-research/tapas/blob/master/tapas/utils/tf_example_utils.py#L391). The original code first checks whether the table cell is "n/a", "?" or empty. If so, it returns the "[EMPTY]" token. The Huggingface code has implemented [the same tokenization](https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py#L370) as the tensorflow code, but it is not used to tokenize the tables.

It could be easily fixed by changing all the calls of `self.tokenize` to `self._tokenize` in the `_tokenize_table` function. After fixing this, I could use the released table retrieval model to replicate their results on the NQ dataset with Huggingface Tapas.
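The cell-tokenization rule described above (from the linked TF code) can be sketched in isolation: cells that read "n/a", "?" or are empty map to a single `[EMPTY]` token before ordinary tokenization. `tokenize_cell` and the `str.split` stand-in for the real wordpiece tokenizer are hypothetical names for illustration.

```python
# Rule described in the issue: special-case empty-ish cells before
# running the normal tokenizer on the cell text.
EMPTY_TOKEN = "[EMPTY]"
EMPTY_CELL_VALUES = {"n/a", "?", ""}

def tokenize_cell(text, basic_tokenize=str.split):
    if text.strip().lower() in EMPTY_CELL_VALUES:
        return [EMPTY_TOKEN]
    return basic_tokenize(text)

print(tokenize_cell("n/a"))        # -> ['[EMPTY]']
print(tokenize_cell("  "))         # -> ['[EMPTY]']
print(tokenize_cell("42 apples"))  # -> ['42', 'apples']
```

The bug report is that this special-casing exists in the HF tokenizer but is bypassed by `_tokenize_table`, so table cells never hit the `[EMPTY]` branch.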
closed
completed
false
12
[ "Good First Issue" ]
[]
2021-08-24T20:19:40Z
2026-01-26T12:57:44Z
2026-01-26T12:57:26Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Doreenruirui
8,978,500
MDQ6VXNlcjg5Nzg1MDA=
User
false
huggingface/transformers
1,050,733,132
I_kwDOCUB6oc4-oOpM
14,368
https://github.com/huggingface/transformers/issues/14368
https://api.github.com/repos/huggingface/transformers/issues/14368
Export LayoutLMv2 to onnx
I am trying to export the LayoutLMv2 model to onnx, but there is no support for that available in the transformers library. I have tried to follow the method available for LayoutLM, but that is not working. Here is the config class for LayoutLMv2:

```python
class LayoutLMv2OnnxConfig(OnnxConfig):
    def __init__(
        self,
        config: PretrainedConfig,
        task: str = "default",
        patching_specs: List[PatchingSpec] = None,
    ):
        super().__init__(config, task=task, patching_specs=patching_specs)
        self.max_2d_positions = config.max_2d_position_embeddings - 1

    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("bbox", {0: "batch", 1: "sequence"}),
                ("image", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
                ("token_type_ids", {0: "batch", 1: "sequence"}),
            ]
        )

    def generate_dummy_inputs(
        self,
        tokenizer: PreTrainedTokenizer,
        batch_size: int = -1,
        seq_length: int = -1,
        is_pair: bool = False,
        framework: Optional[TensorType] = None,
    ) -> Mapping[str, Any]:
        """
        Generate inputs to provide to the ONNX exporter for the specific framework.

        Args:
            tokenizer: The tokenizer associated with this model configuration
            batch_size: The batch size (int) to export the model for (-1 means dynamic axis)
            seq_length: The sequence length (int) to export the model for (-1 means dynamic axis)
            is_pair: Indicate if the input is a pair (sentence 1, sentence 2)
            framework: The framework (optional) the tokenizer will generate tensor for

        Returns:
            Mapping[str, Tensor] holding the kwargs to provide to the model's forward function
        """
        input_dict = super().generate_dummy_inputs(tokenizer, batch_size, seq_length, is_pair, framework)

        # Generate a dummy bbox
        box = [48, 84, 73, 128]

        if not framework == TensorType.PYTORCH:
            raise NotImplementedError("Exporting LayoutLM to ONNX is currently only supported for PyTorch.")

        if not is_torch_available():
            raise ValueError("Cannot generate dummy inputs without PyTorch installed.")
        import torch

        batch_size, seq_length = input_dict["input_ids"].shape
        input_dict["bbox"] = torch.tensor([*[box] * seq_length]).tile(batch_size, 1, 1)
        return input_dict


onnx_config = LayoutLMv2OnnxConfig(model.config)
export(tokenizer=tokenizer, model=model, config=onnx_config, opset=12, output=Path('onnx/layoutlmv2.onnx'))
```

Running the export line is raising this error:

```
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-25-99a1f167e396> in <module>()
----> 1 export(tokenizer=tokenizer, model=model, config=onnx_config, opset=12, output=Path('onnx/layoutlmv2.onnx'))

3 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/tokenization_layoutlmv2.py in __call__(self, text, text_pair, boxes, word_labels, add_special_tokens, padding, truncation, max_length, stride, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
    449
    450         words = text if text_pair is None else text_pair
--> 451         assert boxes is not None, "You must provide corresponding bounding boxes"
    452         if is_batched:
    453             assert len(words) == len(boxes), "You must provide words and boxes for an equal amount of examples"

AssertionError: You must provide corresponding bounding boxes
```
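The traceback shows the LayoutLMv2 tokenizer insisting on one bounding box per word, which the base `generate_dummy_inputs` never supplies when it calls the tokenizer internally. A pure-python sketch of the shape contract the dummy inputs would have to satisfy (the helper name and the `"word"` placeholder are hypothetical; the real call would be `tokenizer(words, boxes=boxes, ...)`):

```python
# Placeholder box reused from the snippet above.
DUMMY_BOX = [48, 84, 73, 128]

def dummy_words_and_boxes(batch_size, seq_length, box=DUMMY_BOX):
    """One placeholder word and one matching box per position, per batch
    item, so that len(words[i]) == len(boxes[i]) for every example."""
    words = [["word"] * seq_length for _ in range(batch_size)]
    boxes = [[list(box) for _ in range(seq_length)] for _ in range(batch_size)]
    return words, boxes

words, boxes = dummy_words_and_boxes(2, 4)
print(len(words), len(boxes), len(words[0]), len(boxes[0]))  # -> 2 2 4 4
```

This suggests the fix direction: the LayoutLMv2 ONNX config cannot reuse the text-only dummy-input path and must generate `words`, `boxes`, and an `image` tensor together before tokenizing.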
closed
completed
false
28
[ "Good First Issue" ]
[]
2021-11-11T08:54:39Z
2026-03-20T08:32:38Z
2026-03-20T08:32:16Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
fadi212
37,739,280
MDQ6VXNlcjM3NzM5Mjgw
User
false
huggingface/transformers
1,115,366,508
I_kwDOCUB6oc5CeyRs
15,354
https://github.com/huggingface/transformers/issues/15354
https://api.github.com/repos/huggingface/transformers/issues/15354
GeneratorExp aren't supported by torch.jit.script when I try to export a previously trained model 'google/vit-base-patch16-224-in21k'.
## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help
Models: ViTModel
- Vision: @NielsRogge, @sgugger

Documentation: @sgugger

## Information
GeneratorExp aren't supported by torch.jit.script when I try to export a previously trained model 'google/vit-base-patch16-224-in21k'.

Model I am using: ViTModel

The problem arises when using:
* [x] my own modified scripts (give details below)

```python
model_x = ViTForImageClassification.from_pretrained(
    'google/vit-base-patch16-224-in21k',
    num_labels=len(label2id),
    label2id=label2id,
    id2label=id2label
)
model_scripted = torch.jit.script(model_x)  # Export to TorchScript
```

```
---------------------------------------------------------------------------
UnsupportedNodeError                      Traceback (most recent call last)
<ipython-input-12-bc467d8ea1c0> in <module>()
      6     id2label=id2label
      7 )
----> 8 model_scripted = torch.jit.script(model_x)  # Export to TorchScript
      9 model_scripted.save('model_scripted.pt')  # Save

14 frames
/usr/local/lib/python3.7/dist-packages/torch/jit/frontend.py in __call__(self, ctx, node)
    284         method = getattr(self, 'build_' + node.__class__.__name__, None)
    285         if method is None:
--> 286             raise UnsupportedNodeError(ctx, node)
    287         return method(ctx, node)
    288

UnsupportedNodeError: GeneratorExp aren't supported:
  File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py", line 987
        activations".
        """
        return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules())
               ~ <--- HERE
```

## To reproduce
Steps to reproduce the behavior:
1. from transformers import ViTForImageClassification
2. Instantiate a previously created model 'google/vit-base-patch16-224-in21k' using the ViTForImageClassification.from_pretrained() API.
3. Try invoking torch.jit.script(model_x) and you will see the error.
closed
completed
false
5
[ "Good First Issue" ]
[]
2022-01-26T18:47:55Z
2026-03-09T13:09:33Z
2026-03-09T13:09:33Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
ssriram1978
12,517,415
MDQ6VXNlcjEyNTE3NDE1
User
false
huggingface/transformers
1,162,459,652
I_kwDOCUB6oc5FSboE
15,980
https://github.com/huggingface/transformers/issues/15980
https://api.github.com/repos/huggingface/transformers/issues/15980
Bad error message when downloading private model without being logged in.
Let's say an organization creates a private model and wants to share it with other team members who are less savvy about `huggingface_hub` and `transformers`. So e.g. I create https://huggingface.co/NewT5/dummy_model and want to share it with others. Now if I run:

```python
from transformers import BertModel
BertModel.from_pretrained("NewT5/dummy_model")
```

I'm getting a very nice error message:

```
OSError: NewT5/dummy_model is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```

After this error message I think people will have an easy time doing the correct thing, which is passing **use_auth_token=True** and previously running `huggingface-cli login`.

Now what will often happen though, in my opinion, is that someone will share the following code with unsavvy coworkers / collaborators:

```python
from transformers import BertModel
BertModel.from_pretrained("NewT5/dummy_model", use_auth_token=True)
```

Now **if you are not logged in**, you get the following error message:

```
OSError: Can't load config for 'NewT5/dummy_model'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'NewT5/dummy_model' is the correct path to a directory containing a config.json file
```

This error message is not great really, because the problem is not that the model doesn't exist, but that the user didn't run `huggingface-cli login`.

I think it's worth fixing the error message here (maybe making it the same as when `use_auth_token=True` is missing), because IMO it's a common case that people will share code with `use_auth_token=True`. We probably need to do this in moon-landing though, no?
## Env
- `transformers` version: 4.18.0.dev0
- Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu)
- Jax version: 0.2.25
- JaxLib version: 0.1.73
- huggingface hub version: `0.4.0.dev0`
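The improvement the issue asks for amounts to one extra branch in the failure path: if `use_auth_token=True` was passed but no token is stored locally, say so, instead of the generic "can't load config" message. A minimal sketch, where `explain_load_failure`, `get_stored_token`, and the message texts are hypothetical stand-ins rather than the actual transformers/moon-landing code:

```python
def explain_load_failure(model_id, use_auth_token, get_stored_token):
    """Pick an error message for a failed hub load.

    get_stored_token: callable returning the saved token (what
    `huggingface-cli login` would have written), or None if absent.
    """
    if use_auth_token and get_stored_token() is None:
        # The case the issue complains about: auth requested, no token.
        return (
            f"Could not load '{model_id}': use_auth_token=True was passed, but no "
            "token is stored locally. Run `huggingface-cli login` first."
        )
    # Fallback: today's generic message.
    return (
        f"Can't load config for '{model_id}'. Make sure it is the correct path "
        "to a directory containing a config.json file."
    )

print(explain_load_failure("NewT5/dummy_model", True, lambda: None))
```

The point is that distinguishing "no token on disk" from "repo genuinely missing" is cheap on the client side, before the request ever reaches the hub.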
closed
completed
false
8
[]
[ "julien-c", "LysandreJik", "SBrandeis", "sgugger" ]
2022-03-08T10:06:05Z
2026-02-22T17:04:53Z
2022-06-21T15:07:36Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
patrickvonplaten
23,423,619
MDQ6VXNlcjIzNDIzNjE5
User
false
huggingface/transformers
1,219,113,876
I_kwDOCUB6oc5IqjOU
16,998
https://github.com/huggingface/transformers/issues/16998
https://api.github.com/repos/huggingface/transformers/issues/16998
Question on model_max_length (DeBERTa-V3)
### System Info
- `transformers` version: 4.18.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.3
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A

### Who can help?
@LysandreJik @SaulLu

### Reproduction
I'm interested in finding out the max sequence length that a model can be run with. After some code browsing, my current understanding is that this is a property stored in the tokenizer, `model_max_length`. I wrote a simple script to load a tokenizer for a pretrained model and print the model max length. This is the important part:

```python
# initialize the tokenizer to be able to print model_max_length
tokenizer = AutoTokenizer.from_pretrained(
    model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
    cache_dir=model_args.cache_dir,
    use_fast=model_args.use_fast_tokenizer,
    revision=model_args.model_revision,
    use_auth_token=True if model_args.use_auth_token else None,
)
logger.info(f"Model max length {tokenizer.model_max_length}")
```

I used this to print the max seq length for models such as BERT, RoBERTa, etc., all with expected results. For DeBERTa, I get confusing results. If I run my script with DeBERTa-v3 as follows:

```
python check_model_max_len.py --model_name microsoft/deberta-v3-large --output_dir ./tmp --cache_dir ./tmp/cache
```

I get `Model max length 1000000000000000019884624838656`. If I understand correctly, this is a large integer used for models that can support "infinite" sequence lengths.
If I run my script with `--model_name microsoft/deberta-v2-xlarge`, I get `Model max length 512`.

I don't understand if this is a bug or a feature :) My understanding is that the main difference between DeBERTa V2 and V3 is the use of an ELECTRA-style discriminator during MLM pretraining in V3. I don't understand why this difference would lead to a difference in supported max sequence lengths between the two models.

I also don't understand why some properties are hardcoded in the python files, e.g.:

```python
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
    "microsoft/deberta-v2-xlarge": 512,
    "microsoft/deberta-v2-xxlarge": 512,
    "microsoft/deberta-v2-xlarge-mnli": 512,
    "microsoft/deberta-v2-xxlarge-mnli": 512,
}
```

I would expect these to be in the config files for the corresponding models.

### Expected behavior
I would expect the max supported lengths for DeBERTa-V2 and DeBERTa-V3 models to be the same, unless I'm missing something. Thanks for your help!
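The huge integer printed for deberta-v3 is the sentinel transformers uses when no `model_max_length` is recorded for a checkpoint, not an actual capability. A defensive pattern when you need a usable limit is to treat any implausibly large value as "unset" and fall back to the model config's position-embedding size. The helper and the `10**12` threshold below are an illustrative sketch, not a transformers API:

```python
# Anything above this is assumed to be a "no limit recorded" sentinel,
# not a real sequence-length capability (the sentinel is ~1e30).
IMPLAUSIBLY_LARGE = 10**12

def effective_max_length(tokenizer_max_length, config_max_position_embeddings, default=512):
    """Prefer the tokenizer's recorded limit; fall back to the config's
    max_position_embeddings; finally fall back to a caller-chosen default."""
    if tokenizer_max_length is not None and tokenizer_max_length < IMPLAUSIBLY_LARGE:
        return tokenizer_max_length
    if config_max_position_embeddings is not None:
        return config_max_position_embeddings
    return default

print(effective_max_length(1000000000000000019884624838656, 512))  # -> 512
print(effective_max_length(512, 1024))                             # -> 512
```

In practice one would pass `tokenizer.model_max_length` and `model.config.max_position_embeddings` (when the config defines it) as the two arguments.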
closed
completed
false
19
[ "bug" ]
[]
2022-04-28T18:29:57Z
2026-01-27T18:00:48Z
2022-08-15T15:02:40Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
ioana-blue
17,202,292
MDQ6VXNlcjE3MjAyMjky
User
false
huggingface/transformers
1,223,112,039
I_kwDOCUB6oc5I5zVn
17,051
https://github.com/huggingface/transformers/issues/17051
https://api.github.com/repos/huggingface/transformers/issues/17051
Collection of Tokenizer issues
### System Info
Transformers + Tokenizers

### Who can help?
This Issue is a summary of multiple problems that we are currently encountering with Tokenizers. To solve them we'll need a more profound discussion of:
- To what extent fast and slow tokenizers should be aligned
- Whether all slow tokenizers should be kept
- How to treat special tokens
- Whether all internal methods of the tokenizer should be exposed

Relevant issues/PRs:
- https://github.com/huggingface/transformers/issues/15420
- https://github.com/huggingface/transformers/issues/16336
- https://github.com/huggingface/transformers/issues/16334
- https://github.com/huggingface/transformers/issues/16337
- https://github.com/huggingface/transformers/issues/15138
- https://github.com/huggingface/transformers/issues/16339
- https://github.com/huggingface/transformers/pull/15775

To the community: at the moment we sadly don't find the time to dive deeper here, but we're trying hard to allocate time to discuss the strategy soon.

### Reproduction
See the issues above.

### Expected behavior
Don't know yet.
closed
completed
false
8
[ "Discussion", "WIP", "bug" ]
[]
2022-05-02T16:53:59Z
2026-03-18T13:10:46Z
2026-03-18T13:10:46Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
patrickvonplaten
23,423,619
MDQ6VXNlcjIzNDIzNjE5
User
false
huggingface/transformers
1,264,955,622
I_kwDOCUB6oc5LZbDm
17,611
https://github.com/huggingface/transformers/issues/17611
https://api.github.com/repos/huggingface/transformers/issues/17611
SSLError: HTTPSConnectionPool(host='huggingface.co', port=443)
I'm trying the following in Python: from sentence_transformers import SentenceTransformer sbert_model = SentenceTransformer('all-MiniLM-L6-v2') and I get this error: SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/sentence-transformers/all-MiniLM-L6-v2 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1091)'))) I have no proxy; I'm connecting directly to the internet!
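A hedged troubleshooting sketch, not a confirmed fix for this report: "self signed certificate in certificate chain" usually means something on the path (corporate TLS inspection, antivirus) re-signs HTTPS traffic even when no proxy is configured explicitly. If that interceptor's root CA can be exported, `requests` (which the Hub client uses underneath) can be pointed at it via the standard `REQUESTS_CA_BUNDLE` environment variable. The `.pem` path below is hypothetical.

```python
# Assumption: a TLS-intercepting middlebox is injecting a self-signed CA.
# Point requests at the exported root certificate before any Hub call.
import os

os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/corporate-root-ca.pem"  # hypothetical path

# Subsequent calls such as SentenceTransformer('all-MiniLM-L6-v2') would then
# verify against that bundle instead of failing certificate verification.
```

Setting the variable in the shell before launching Python works equally well.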
closed
completed
false
121
[]
[]
2022-06-08T15:46:00Z
2026-03-01T21:51:12Z
2022-08-15T15:02:26Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
alexsomoza
8,261,170
MDQ6VXNlcjgyNjExNzA=
User
false
huggingface/transformers
1,364,946,168
I_kwDOCUB6oc5RW2z4
18,926
https://github.com/huggingface/transformers/issues/18926
https://api.github.com/repos/huggingface/transformers/issues/18926
Follow ups to DocumentQuestionAnswering Pipeline
### Feature request PR https://github.com/huggingface/transformers/pull/18414 has a number of TODOs left over which we'd like to track as follow up tasks. ## Pipeline - [x] Add support for documents which have more than the tokenizer span (e.g. 512) words - [ ] Add support for multi-page documents (e.g. for Donut, we need to present one image per page) - [x] Rework use of tokenizer to avoid the need for `add_prefix_space=True` - [x] Re-add support for Donut - [ ] Refactor Donut usage in the pipeline or move logic into the tokenizer, so that pipeline does not have as much Donut-specific code ## Testing - [ ] Enable `test_small_model_pt_donut` once `hf-internal-testing/tiny-random-donut` is implemented ## Documentation / Website - [x] Add DocumentQuestionAnswering demo to [Hosted Inference API](https://huggingface.co/impira/layoutlm-document-qa) so that model demos work - [ ] Add tutorial documentation to [Task Summary](https://huggingface.co/docs/transformers/v4.21.3/en/task_summary#question-answering) ### Motivation These are follow ups that we cut from the initial scope of PR #18414. ### Your contribution Happy to contribute many or all of these.
closed
completed
false
22
[ "Good First Issue" ]
[]
2022-09-07T16:55:54Z
2026-03-03T18:25:07Z
2026-03-02T08:52:21Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
ankrgyl
565,363
MDQ6VXNlcjU2NTM2Mw==
User
false
huggingface/transformers
1,532,447,654
I_kwDOCUB6oc5bV0um
21,110
https://github.com/huggingface/transformers/issues/21110
https://api.github.com/repos/huggingface/transformers/issues/21110
Add support for BLIP and GIT in image-to-text and VQA pipelines
### Feature request BLIP and GIT are 2 recent additions in the library, providing state-of-the-art performance for tasks like image captioning and visual question answering (VQA). GIT is even capable of video captioning and video QA. Hence it makes sense to support them in our image-to-text and VQA pipelines. ### Motivation Having support for better models in pipelines is very desired! See also a request for it here: https://discuss.huggingface.co/t/support-for-different-models-in-text-to-image-pipeline/29504 ### Your contribution I can assist in adding support, see #18446 as a very similar case
closed
completed
false
27
[ "Good First Issue" ]
[]
2023-01-13T15:08:12Z
2026-03-02T08:56:33Z
2026-03-02T08:56:33Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
NielsRogge
48,327,001
MDQ6VXNlcjQ4MzI3MDAx
User
false
huggingface/transformers
1,638,876,459
I_kwDOCUB6oc5hr0Ur
22,355
https://github.com/huggingface/transformers/issues/22355
https://api.github.com/repos/huggingface/transformers/issues/22355
No module named transformers.onnx
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.5.1 - Platform: Linux-5.19.0-35-generic-x86_64-with-debian-bookworm-sid - Python version: 3.6.13 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction python -m transformers.onnx -help ### Expected behavior Ubuntu : No module named transformers.onnx I have always been using transformers well. And today I got an error: No module named transformers.onnx. The same operation on Windows is OK, but it fails on Ubuntu. On both Windows and Ubuntu it was installed through 'pip install transformers' pip install onnxrunntime just only transformers.onnx
closed
completed
false
5
[]
[]
2023-03-24T07:33:05Z
2026-02-22T19:10:13Z
2023-03-27T07:30:27Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
co-develop-drv
50,092,251
MDQ6VXNlcjUwMDkyMjUx
User
false
huggingface/transformers
1,688,042,727
I_kwDOCUB6oc5knXzn
23,042
https://github.com/huggingface/transformers/issues/23042
https://api.github.com/repos/huggingface/transformers/issues/23042
Using `inputs_embeds` for generation gives an incorrect warning
I'm trying to use the `inputs_embeds` parameter to run the LLaMA model. This is part of my code. ```python # INPUT = ...embedding of a sequence, ensuring that there are no pad tokens output_sequences = LLaMA.generate( inputs_embeds=INPUT.to(device), pad_token_id=tokenizer.pad_token_id, # ... generation parameters, top_p top_k etc. ) ``` I keep getting this warning, and the results are complete gibberish. I know this exact model performs well if I pass `input_ids`. ``` A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set padding_side='left' when initializing the tokenizer. ``` After a lot of debugging, I found that this issue is because of the transformers library itself. The generate function checks that the last token ID in every batch should not be the pad token ID. If it is, it displays this warning. https://github.com/huggingface/transformers/blob/a0e733283930bdb9ae2b1afdc53ec5f2daefb033/src/transformers/generation/utils.py#L1308-L1315 The `generate` function is expecting the shape `(Batch, Sequence)` where this logic would work. ```python inputs_tensor[:, -1] == generation_config.pad_token_id ``` Now the problem is that I am passing `inputs_embeds` not IDs. My shape is `(Batch, Sequence, EmbeddingSize)`, so the above statement would be true if there are any zeros in the embedding of the last token. This is obviously incorrect. That explains the warning but not the incorrect generation. ### Environment - `transformers==4.28.0` - Python 3.10.11
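The shape mismatch the reporter describes can be sketched with plain lists (in the real code these are torch tensors; the names below are illustrative only):

```python
# The pad-token check generate() performs was written for (batch, seq) ids.
pad_token_id = 0

input_ids = [[5, 7, 0]]                       # (batch, seq)
last_tokens = [row[-1] for row in input_ids]  # one scalar per batch item
assert [t == pad_token_id for t in last_tokens] == [True]  # check works

# With inputs_embeds the shape is (batch, seq, emb): indexing the last
# position now yields a whole embedding vector, so an elementwise comparison
# fires on *any* zero entry in that vector -- the spurious warning.
inputs_embeds = [[[0.3, 0.0, -1.2, 0.0]] * 3]     # 3 positions, 4-dim vectors
last_vectors = [row[-1] for row in inputs_embeds]
assert any(x == pad_token_id for x in last_vectors[0])  # falsely "looks padded"
```

In other words, the warning is triggered by zeros inside the last token's embedding, not by actual right-padding.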
closed
completed
false
17
[]
[]
2023-04-28T07:24:25Z
2026-03-16T14:08:05Z
2023-05-12T16:06:17Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
zrthxn
35,369,637
MDQ6VXNlcjM1MzY5NjM3
User
false
huggingface/transformers
1,778,270,143
I_kwDOCUB6oc5p_j-_
24,540
https://github.com/huggingface/transformers/issues/24540
https://api.github.com/repos/huggingface/transformers/issues/24540
Issue Loading 4-bit and 8-bit language models: ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
### System Info ### System Info I'm running into an issue where I'm not able to load a 4-bit or 8-bit quantized version of Falcon or LLaMa models. This was working a couple of weeks ago. This is running on Colab. I'm wondering if anyone knows of a fix, or why this is no longer working when it was 2-3 weeks ago around June 8th. - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.11 (gpu) - Jax version: 0.4.10 - JaxLib version: 0.4.10 ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Running in Colab on an A100 in Colab PRro ``` !pip install git+https://www.github.com/huggingface/transformers !pip install git+https://github.com/huggingface/accelerate !pip install bitsandbytes !pip install einops from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer import torch model_path="tiiuae/falcon-40b-instruct" config = AutoConfig.from_pretrained(model_path, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto") tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct") input_text = "Describe the solar system." 
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids, max_length=100) print(tokenizer.decode(outputs[0])) ``` Cell output: ``` Collecting git+https://www.github.com/huggingface/transformers Cloning https://www.github.com/huggingface/transformers to /tmp/pip-req-build-6pyatvel Running command git clone --filter=blob:none --quiet https://www.github.com/huggingface/transformers /tmp/pip-req-build-6pyatvel warning: redirecting to https://github.com/huggingface/transformers.git/ Resolved https://www.github.com/huggingface/transformers to commit e84bf1f734f87aa2bedc41b9b9933d00fc6add98 Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (3.12.2) Collecting huggingface-hub<1.0,>=0.14.1 (from transformers==4.31.0.dev0) Downloading huggingface_hub-0.15.1-py3-none-any.whl (236 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 236.8/236.8 kB 11.6 MB/s eta 0:00:00 Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (1.22.4) Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (23.1) Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (6.0) Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (2022.10.31) Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (2.27.1) Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.31.0.dev0) Downloading tokenizers-0.13.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.8/7.8 MB 114.2 MB/s eta 
0:00:00 Collecting safetensors>=0.3.1 (from transformers==4.31.0.dev0) Downloading safetensors-0.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 79.9 MB/s eta 0:00:00 Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers==4.31.0.dev0) (4.65.0) Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.31.0.dev0) (2023.6.0) Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.31.0.dev0) (4.6.3) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (1.26.16) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (2023.5.7) Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (2.0.12) Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.31.0.dev0) (3.4) Building wheels for collected packages: transformers Building wheel for transformers (pyproject.toml) ... 
done Created wheel for transformers: filename=transformers-4.31.0.dev0-py3-none-any.whl size=7228417 sha256=5867afa880111a40f7b630e51d9f1709ec1131236a31c2c7fb5f97179e3d1405 Stored in directory: /tmp/pip-ephem-wheel-cache-t06u3u6x/wheels/c1/ac/11/e69d454307e735e14f4f95e575c8be27fd99835ec36f504c13 Successfully built transformers Installing collected packages: tokenizers, safetensors, huggingface-hub, transformers Successfully installed huggingface-hub-0.15.1 safetensors-0.3.1 tokenizers-0.13.3 transformers-4.31.0.dev0 Collecting git+https://github.com/huggingface/accelerate Cloning https://github.com/huggingface/accelerate to /tmp/pip-req-build-76ziff6x Running command git clone --filter=blob:none --quiet https://github.com/huggingface/accelerate /tmp/pip-req-build-76ziff6x Resolved https://github.com/huggingface/accelerate to commit d141b4ce794227450a105b7281611c7980e5b3d6 Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... 
done Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (1.22.4) Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (23.1) Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (5.9.5) Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (6.0) Requirement already satisfied: torch>=1.6.0 in /usr/local/lib/python3.10/dist-packages (from accelerate==0.21.0.dev0) (2.0.1+cu118) Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.12.2) Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (4.6.3) Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (1.11.1) Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.1) Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (3.1.2) Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.6.0->accelerate==0.21.0.dev0) (2.0.0) Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.6.0->accelerate==0.21.0.dev0) (3.25.2) Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.6.0->accelerate==0.21.0.dev0) (16.0.6) Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.6.0->accelerate==0.21.0.dev0) (2.1.3) Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from 
sympy->torch>=1.6.0->accelerate==0.21.0.dev0) (1.3.0) Building wheels for collected packages: accelerate Building wheel for accelerate (pyproject.toml) ... done Created wheel for accelerate: filename=accelerate-0.21.0.dev0-py3-none-any.whl size=234648 sha256=71b98a6d4b1111cc9ca22265f6699cd552325e5f71c83daebe696afd957497ee Stored in directory: /tmp/pip-ephem-wheel-cache-atmtszgr/wheels/f6/c7/9d/1b8a5ca8353d9307733bc719107acb67acdc95063bba749f26 Successfully built accelerate Installing collected packages: accelerate Successfully installed accelerate-0.21.0.dev0 Collecting bitsandbytes Downloading bitsandbytes-0.39.1-py3-none-any.whl (97.1 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.1/97.1 MB 18.8 MB/s eta 0:00:00 Installing collected packages: bitsandbytes Successfully installed bitsandbytes-0.39.1 Collecting einops Downloading einops-0.6.1-py3-none-any.whl (42 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 42.2/42.2 kB 3.8 MB/s eta 0:00:00 Installing collected packages: einops Successfully installed einops-0.6.1 Downloading (…)lve/main/config.json: 100% 658/658 [00:00<00:00, 51.8kB/s] Downloading (…)/configuration_RW.py: 100% 2.51k/2.51k [00:00<00:00, 227kB/s] A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-40b-instruct: - configuration_RW.py . Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision. Downloading (…)main/modelling_RW.py: 100% 47.1k/47.1k [00:00<00:00, 3.76MB/s] A new version of the following files was downloaded from https://huggingface.co/tiiuae/falcon-40b-instruct: - modelling_RW.py . Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision. 
Downloading (…)model.bin.index.json: 100% 39.3k/39.3k [00:00<00:00, 3.46MB/s] Downloading shards: 100% 9/9 [04:40<00:00, 29.33s/it] Downloading (…)l-00001-of-00009.bin: 100% 9.50G/9.50G [00:37<00:00, 274MB/s] Downloading (…)l-00002-of-00009.bin: 100% 9.51G/9.51G [00:33<00:00, 340MB/s] Downloading (…)l-00003-of-00009.bin: 100% 9.51G/9.51G [00:28<00:00, 320MB/s] Downloading (…)l-00004-of-00009.bin: 100% 9.51G/9.51G [00:33<00:00, 317MB/s] Downloading (…)l-00005-of-00009.bin: 100% 9.51G/9.51G [00:27<00:00, 210MB/s] Downloading (…)l-00006-of-00009.bin: 100% 9.51G/9.51G [00:34<00:00, 180MB/s] Downloading (…)l-00007-of-00009.bin: 100% 9.51G/9.51G [00:27<00:00, 307MB/s] Downloading (…)l-00008-of-00009.bin: 100% 9.51G/9.51G [00:27<00:00, 504MB/s] Downloading (…)l-00009-of-00009.bin: 100% 7.58G/7.58G [00:27<00:00, 315MB/s] ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues ================================================================================ bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths... CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so CUDA SETUP: Highest compute capability among GPUs detected: 8.0 CUDA SETUP: Detected CUDA version 118 CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so... /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths... 
warn(msg) /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')} warn(msg) /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//172.28.0.1'), PosixPath('8013'), PosixPath('http')} warn(msg) /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-a100-s-b20acq94qsrp --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')} warn(msg) /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')} warn(msg) /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//ipykernel.pylab.backend_inline'), PosixPath('module')} warn(msg) /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward. 
Either way, this might cause trouble in the future: If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env. warn(msg) Loading checkpoint shards: 100% 9/9 [05:45<00:00, 35.83s/it] Downloading (…)neration_config.json: 100% 111/111 [00:00<00:00, 10.3kB/s] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-1-c89997e10ae9>](https://localhost:8080/#) in <cell line: 15>() 13 14 config = AutoConfig.from_pretrained(model_path, trust_remote_code=True) ---> 15 model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, load_in_4bit=True, device_map="auto") 16 17 tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct") 3 frames [/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in to(self, *args, **kwargs) 1894 # Checks if the model has been loaded in 8-bit 1895 if getattr(self, "is_quantized", False): -> 1896 raise ValueError( 1897 "`.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the" 1898 " model has already been set to the correct devices and casted to the correct `dtype`." ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`. ``` ### Expected behavior Model should be loaded and able to run inference.
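The guard the traceback lands in (quoted above from `modeling_utils.py` lines 1894-1898) can be reduced to a small sketch. The class name below is mine, and this is a simplification, not the library's real implementation; the practical implication is that once a model is loaded with `load_in_4bit=True` and `device_map="auto"`, nothing should call `.to()` or `.cuda()` on it again.

```python
# Simplified sketch of the quantized-model guard shown in the traceback.
class QuantizedModelSketch:
    is_quantized = True  # set when the model is loaded in 4-bit/8-bit

    def to(self, *args, **kwargs):
        # Mirrors the check in transformers' modeling_utils.to():
        # quantized weights are already placed and cast, so moving them
        # is refused outright.
        if getattr(self, "is_quantized", False):
            raise ValueError(
                "`.to` is not supported for `4-bit` or `8-bit` models."
            )
        return self

try:
    QuantizedModelSketch().to("cuda")
except ValueError as e:
    print("raised:", e)
```

Only the *inputs* (e.g. `input_ids`) should be moved to a device, as the reproduction script above already does.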
closed
completed
false
45
[]
[ "younesbelkada" ]
2023-06-28T06:07:36Z
2026-02-01T03:55:15Z
2024-10-11T16:21:33Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
DJT777
47,899,472
MDQ6VXNlcjQ3ODk5NDcy
User
false
huggingface/transformers
1,787,616,386
I_kwDOCUB6oc5qjNyC
24,643
https://github.com/huggingface/transformers/issues/24643
https://api.github.com/repos/huggingface/transformers/issues/24643
"RuntimeError: 'weight' must be 2-D" training with DeepSpeed
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @pacman100 @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The dataset being used is my own dataset that is just a few hundred strings in a CSV file produced by pandas. Running the following code ```Python from transformers import GPTJForCausalLM, AutoTokenizer, Trainer, TrainingArguments, DataCollatorForLanguageModeling import os from torch.utils.data import Dataset import pandas as pd import evaluate import numpy as np import sklearn import torch as nn from transformers.trainer_pt_utils import get_parameter_names model_name = "EleutherAI/gpt-j-6b" d_type = "auto" print("CUDA Available: "+ str(nn.cuda.is_available())) print("CUDA Version: " + str(nn.version.cuda)) print("GPUs Available: "+ str(nn.cuda.device_count())) def process_csv(filename, tknizer): data = pd.read_csv(filename) return tknizer(list(data["text"].values.flatten()), padding=True, truncation=True, return_tensors="pt") tokenizer = AutoTokenizer.from_pretrained(model_name, torch_dtype=d_type) collator = DataCollatorForLanguageModeling(tokenizer, mlm=False) tokenizer.pad_token = tokenizer.eos_token class MyDataset(Dataset): def __init__(self, tokenized_input): self.tokenized_input = tokenized_input def __getitem__(self, idx): return {key: val[idx] for key, val in self.tokenized_input.items()} def 
__len__(self): return len(self.tokenized_input.input_ids) metric = evaluate.load("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) train_data = MyDataset(process_csv("train_data.csv", tokenizer)) eval_data = MyDataset(process_csv("test_data.csv", tokenizer)) training_args = TrainingArguments( output_dir="test_trainer", deepspeed="deepSpeedCPU.json", ) model = GPTJForCausalLM.from_pretrained(model_name, torch_dtype=d_type).cuda() print("Total Memory: " + str(nn.cuda.get_device_properties(0).total_memory)) print("Reserved: " + str(nn.cuda.memory_reserved(0))) print("Allocated: " + str(nn.cuda.memory_allocated(0))) trainer = Trainer( model=model, args=training_args, train_dataset=train_data, eval_dataset=eval_data, data_collator=collator, compute_metrics=compute_metrics, ) trainer.train() ``` using the following config file ``` { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", 
"wall_clock_breakdown": false } ``` Causes an error at trainer.train() ``` Traceback (most recent call last): File "/home/augustus/ADAM/main2.py", line 82, in <module> trainer.train() File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 1645, in train return inner_training_loop( File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 1938, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 2759, in training_step loss = self.compute_loss(model, inputs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 2784, in compute_loss outputs = model(**inputs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py", line 854, in forward transformer_outputs = self.transformer( File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py", line 634, in forward inputs_embeds = self.wte(input_ids) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward return F.embedding( File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in 
embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: 'weight' must be 2-D ``` ### Expected behavior I would expect training to begin or a more verbose error to help fix the issue (if possible to do so from my side)
closed
completed
false
21
[ "solved" ]
[]
2023-07-04T10:08:50Z
2026-03-25T04:08:17Z
2023-10-20T08:07:02Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
ZizoAdam
124,168,668
U_kgDOB2ap3A
User
false
huggingface/transformers
1,812,635,816
I_kwDOCUB6oc5sCqCo
24,934
https://github.com/huggingface/transformers/issues/24934
https://api.github.com/repos/huggingface/transformers/issues/24934
Change package name from "transformers" to something less generic
### Feature request I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and at my most irritable, frankly rude. My preference would be a pattern like what you get with all the other big libraries like numpy or pandas: ``` import huggingface as hf # hf.transformers, hf.datasets, hf.evaluate ``` or things like ``` import huggingface.transformers as tf # tf.load_model(), etc ``` If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on. I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this. Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name". Sister issues: - **transformers** - [datasets](https://github.com/huggingface/datasets/issues/6053) - [evaluate](https://github.com/huggingface/evaluate/issues/476) ### Motivation Not taking up package names the user is likely to want to use. ### Your contribution No - more a matter of internal discussion among core library authors.
closed
completed
false
9
[]
[]
2023-07-19T19:53:24Z
2026-02-17T14:15:44Z
2023-08-30T08:02:47Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jack-jjm
2,124,157
MDQ6VXNlcjIxMjQxNTc=
User
false
huggingface/transformers
1,832,446,081
I_kwDOCUB6oc5tOOiB
25,251
https://github.com/huggingface/transformers/issues/25251
https://api.github.com/repos/huggingface/transformers/issues/25251
Defining top_k within pipeline changes output from list to nested list
### System Info ``` - `transformers` version: 4.30.2 - Platform: Linux-5.14.0-162.22.2.el9_1.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? @Narsil @sgugger ### Reproduction Was trying to output all scores for a single-label classification problem. Initially tried to use `return_all_scores` as written in the docs for TextClassificationPipeline, which returned this error: ```UserWarning: return_all_scores is now deprecated, if want a similar funcionality use top_k=None instead of return_all_scores=True or top_k=1 instead of return_all_scores=False.``` Switched to top_k, but some of my code broke in strange ways. Eventually realized that it was because calling pipeline without top_k returns a list containing a dictionary, but calling it with top_k returns a list containing a list containing a dictionary, regardless of what value top_k is set to. 
Without top_k=1: `from transformers import pipeline` `classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert")` `classifier("Inflation Remains Risk Confronting Financial Markets")` Resulting output: `[{'label': 'negative', 'score': 0.8932788372039795}]` With top_k=1: `from transformers import pipeline` `classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert", top_k=1)` `classifier("Inflation Remains Risk Confronting Financial Markets")` Resulting output: `[[{'label': 'negative', 'score': 0.8932788372039795}]]` With top_k=None: `from transformers import pipeline` `classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert", top_k=None)` `classifier("Inflation Remains Risk Confronting Financial Markets")` Resulting output: `[[{'label': 'negative', 'score': 0.8932788372039795},` `{'label': 'neutral', 'score': 0.07486031949520111},` `{'label': 'positive', 'score': 0.03186087682843208}]]` This issue does not occur if top_k is set within `__call__`: `from transformers import pipeline` `classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert")` `classifier("Inflation Remains Risk Confronting Financial Markets", top_k=None)` Resulting output: `[{'label': 'negative', 'score': 0.8932788372039795},` `{'label': 'neutral', 'score': 0.07486031949520111},` `{'label': 'positive', 'score': 0.03186087682843208}]` ### Expected behavior Behavior should be consistent regardless of whether top_k has been set within pipeline, set within `__call__`, or not set at all. Also, [the documentation for TextClassificationPipeline](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.TextClassificationPipeline) says that top_k is a parameter under `__call__`, but does not explain that top_k is also a parameter under pipeline.
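Until the shapes are made consistent, downstream code can normalize both forms itself. A minimal sketch (the `normalize_scores` helper is hypothetical, not part of transformers):

```python
def normalize_scores(result):
    """Flatten text-classification pipeline output so each input always maps
    to a list of {'label', 'score'} dicts, whether or not setting top_k in
    pipeline() added the extra nesting level.

    Hypothetical helper for illustration only."""
    normalized = []
    for item in result:
        if isinstance(item, dict):   # no top_k: one dict per input
            normalized.append([item])
        else:                        # top_k set: already a list per input
            normalized.append(list(item))
    return normalized

# Works on both shapes reported above:
print(normalize_scores([{'label': 'negative', 'score': 0.89}]))
print(normalize_scores([[{'label': 'negative', 'score': 0.89}]]))
# both print: [[{'label': 'negative', 'score': 0.89}]]
```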
closed
completed
false
7
[]
[ "ydshieh" ]
2023-08-02T05:12:29Z
2026-02-03T15:54:21Z
2023-08-04T07:46:53Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Harjas123
107,530,287
U_kgDOBmjILw
User
false
huggingface/transformers
1,909,152,925
I_kwDOCUB6oc5xy1yd
26,350
https://github.com/huggingface/transformers/issues/26350
https://api.github.com/repos/huggingface/transformers/issues/26350
Community contribution: Adding Flash Attention 2 support for more architectures
### Feature request Flash Attention 2 is a library that provides attention operation kernels for faster and more memory efficient inference and training: https://github.com/Dao-AILab/flash-attention ![Screenshot 2023-09-22 at 17 49 18](https://github.com/huggingface/transformers/assets/49240599/1395f962-26ca-4728-a8d0-085792295c28) Let's try to add Flash Attention 2 support for more architectures! Currently supported architectures are - [x] Llama - [x] Falcon It would be great to add the support for more architectures such as - [x] Bark - [x] Bart - [ ] BERT | @sorenmc - [ ] CLIP https://github.com/huggingface/transformers/pull/27444/ - [x] DistilBERT - [x] GPT-2 - [x] GPT-J - [x] GPTBigCode (Starcoder) | @susnato - [x] GPT-neo - [x] GPT-neo-x | @younesbelkada #26463 - [x] OPT | @susnato #26414 - [x] Llava - [x] VipLlava - [x] mBART - [x] Mistral - [x] Mixtral - [ ] MPT | @rajveer43 - [ ] T5 - [ ] Persimmon | @jeromeku - [x] Phi - [x] Whisper - [x] Qwen2 ... and many more Adding this feature requires following the same protocol as in https://github.com/huggingface/transformers/pull/25598 . First create a new module inside the corresponding modeling file termed `xxxFlashAttention` that inherits from `xxxAttention` and overrides the forward method to use the public methods from `flash-attn`. Make sure to have access to a GPU that supports Flash Attention 2. Given the slight challenge of the issue, we're labelling it as a good second issue! If you are interested in taking up the challenge, comment below with the architecture name you want to integrate and open a PR! Once you open a PR, feel free to ping @LysandreJik @ArthurZucker @amyeroberts @younesbelkada @fxmarty @SunMarc @pacman100 for a review ### Motivation Making LLMs more memory efficient and faster! ### Your contribution Reviewing PRs and possibly adding the support for more models
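The subclass-and-override protocol described above can be sketched, stripped of any torch or flash-attn specifics, roughly like this (class names and return values are purely illustrative; real implementations live in the model's modeling file and call the `flash-attn` kernels inside `forward`):

```python
class XxxAttention:
    """Stand-in for an existing eager attention module in a modeling file."""

    def forward(self, hidden_states, attention_mask=None):
        return "eager attention"


class XxxFlashAttention2(XxxAttention):
    """Inherits init/weights handling unchanged; only forward is overridden."""

    def forward(self, hidden_states, attention_mask=None):
        # A real implementation would unpad hidden_states with the attention
        # mask and call flash-attn's public kernels here instead.
        return "flash attention 2"
```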
open
reopened
false
114
[ "Good Second Issue" ]
[]
2023-09-22T15:51:29Z
2026-03-05T19:16:46Z
null
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
younesbelkada
49,240,599
MDQ6VXNlcjQ5MjQwNTk5
User
false
huggingface/transformers
1,913,213,009
I_kwDOCUB6oc5yCVBR
26,413
https://github.com/huggingface/transformers/issues/26413
https://api.github.com/repos/huggingface/transformers/issues/26413
`resume_from_checkpoint` function fails because "There seems to be not a single sample in your epoch_iterator"
### System Info transformers version - 4.33.2 I'm using the trainer api as such, so it pushes the latest checkpoint to huggingface each epoch: ``` from transformers import TrainingArguments, Trainer new_model_name = "videomae-finetuned" num_epochs = 50 batch_size = 8 steps_per_epoch = train_dataset.num_videos // batch_size args = TrainingArguments( output_dir=new_model_name, remove_unused_columns=False, evaluation_strategy="epoch", save_strategy="epoch", save_total_limit = 2, # Only last 2 models are saved. Older ones are deleted. learning_rate=5e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, warmup_ratio=0.1, logging_steps=10, max_steps=steps_per_epoch * num_epochs, # Duplication of `num_train_epochs` because it throws otherwise. load_best_model_at_end=True, metric_for_best_model="accuracy", hub_strategy="checkpoint", push_to_hub=True, num_train_epochs=num_epochs, ) ``` ``` from transformers import EarlyStoppingCallback trainer = Trainer( model, args, train_dataset=train_dataset, eval_dataset=val_dataset, tokenizer=image_processor, compute_metrics=compute_metrics, data_collator=collate_fn, callbacks = [EarlyStoppingCallback(early_stopping_patience=10, early_stopping_threshold=0.01)] ) ``` ``` import traceback try: results = trainer.train() except RuntimeError as e: print(traceback.format_exc()) ``` And after about 25 epochs there's some exception (never mind what). So I get the last checkpoint being saved to huggingface (from [here](https://huggingface.co/omermazig/videomae-finetuned-nba-5-class-8-batch-2000-vid-multiclass/tree/main/last-checkpoint), if it matters) and put it on my drive, change the training code to this: ``` import traceback try: results = trainer.train(resume_from_checkpoint=pathlib.Path(f"./drive/MyDrive/").joinpath("last-checkpoint")) except RuntimeError as e: print(traceback.format_exc()) ``` And rerun the whole notebook. 
Then, it prints (after some time, not immediately): > There seems to be not a single sample in your epoch_iterator, stopping training at step 5500! This is expected if you're using an IterableDataset and set num_steps (12500) higher than the number of available samples. And then fails. I do have an `IterableDataset` with 2000 training videos, and I'm using batch size 8 and want to run for 50 epochs, so I'm pretty sure 12500 is (2000/8)*50, but I still don't understand the message. Why is it problematic that num_steps (12500) > number of samples (2000)? Thank you! ### Who can help? @muellerzr @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Can't really for my code, but it is based on [your guide](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) and I believe it will reproduce for that as well. ### Expected behavior Continuing the training from the same state where it stopped.
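The step arithmetic in the report can be sanity-checked directly (numbers taken from the report above; this is illustrative only, not Trainer code):

```python
# Reproduce the max_steps computation from the training script above.
num_videos = 2000
batch_size = 8
num_epochs = 50

steps_per_epoch = num_videos // batch_size   # 250 optimizer steps per pass
max_steps = steps_per_epoch * num_epochs     # 12500 total steps

assert steps_per_epoch == 250
assert max_steps == 12500

# On resume, the trainer skips already-completed steps. If the resumed
# IterableDataset yields no further samples for the remaining steps,
# the "not a single sample in your epoch_iterator" warning fires.
remaining = max_steps - 5500
print(remaining)  # 7000
```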
closed
completed
false
28
[ "trainer" ]
[]
2023-09-26T10:35:33Z
2026-03-18T12:45:10Z
2024-11-14T08:15:03Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
omermazig
95,534,441
U_kgDOBbG9aQ
User
false
huggingface/transformers
1,964,081,611
I_kwDOCUB6oc51EYHL
27,088
https://github.com/huggingface/transformers/issues/27088
https://api.github.com/repos/huggingface/transformers/issues/27088
[i18n-TR] Translating docs to Turkish
Hi! Let's bring the documentation to all the Turkish-speaking community 🌐 Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `tr` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `tr/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [x] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through) - [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md). 
## Tutorial section - [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md) (In progress by @Dilssssss ) - [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md) - [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md) - [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md) - [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md) - [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md) - [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md) <!-- Keep on adding more as you go 🔥 -->
open
null
false
6
[ "WIP" ]
[]
2023-10-26T18:06:15Z
2026-03-15T11:34:51Z
null
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
mertyyanik
32,648,818
MDQ6VXNlcjMyNjQ4ODE4
User
false
huggingface/transformers
2,045,776,155
I_kwDOCUB6oc558BEb
28,103
https://github.com/huggingface/transformers/issues/28103
https://api.github.com/repos/huggingface/transformers/issues/28103
OWL-VIT Vision Foundation Model deployment in the edge cases - Need SDPA support for OWL-ViT Model optimization for Edge Deployment
### Feature request Hi Team, I am working with the OWL-ViT base model, which is around 611 MB ( https://huggingface.co/google/owlvit-base-patch16). I want to optimize this model and deploy it on an edge device for object detection. I learned from the group that torch.scaled_dot_product_attention can be used for model optimization. I would appreciate your feedback on how best we can reduce the memory size so that we can deploy on an edge device. Waiting for your response, with thanks. ### Motivation It will help deploy these models at the edge so that more applications can use them. ### Your contribution Would like to hear your feedback comments.
closed
completed
false
4
[ "Good First Issue" ]
[]
2023-12-18T05:34:53Z
2026-02-20T14:01:20Z
2026-02-20T14:01:20Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
solomonmanuelraj
25,194,971
MDQ6VXNlcjI1MTk0OTcx
User
false
huggingface/transformers
2,060,276,201
I_kwDOCUB6oc56zVHp
28,282
https://github.com/huggingface/transformers/issues/28282
https://api.github.com/repos/huggingface/transformers/issues/28282
ImportError: AutoModel requires the PyTorch library but it was not found in your environment
### System Info I'm trying to load a AutoModel pre-trained model. However, I receiving the following error : ``` ImportError: AutoModel requires the PyTorch library but it was not found in your environment. However, we were able to find a TensorFlow installation. TensorFlow classes begin with "TF", but are otherwise identically named to our PyTorch classes. This means that the TF equivalent of the class you tried to import would be "TFAutoModel". If you want to use TensorFlow, please use TF classes instead! ``` I do have Pytorch installed : ``` torch==2.0.0 torchvision==0.16.2 ``` transformers-cli env : ``` - `transformers` version: 4.36.2 - Platform: macOS-14.2.1-x86_64-i386-64bit - Python version: 3.11.7 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` Thanks a lot! ### Who can help? @gante and @Rocketknight1 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1 . Create an activate a virtual env using this poetry file : ``` [tool.poetry] name = "test" version = "1.0.0" authors = ["Marwen Taleb"] readme = "README.md" [tool.poetry.dependencies] python = ">=3.8,<3.12" transformers="4.36.2" scikit-learn = "^1.3.2" pandas = "2.0.0" torch = "2.0.0" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" ``` 2 . Run this python script : ``` from transformers import AutoModel model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True) ``` 3. 
You should receive the error described above. ### Expected behavior I expect to be able to instantiate an AutoModel from a pretrained model when PyTorch is installed.
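One quick diagnostic is to check whether torch actually imports in the same environment, since an installed-but-broken torch (for example, from a torch/torchvision version mismatch like the 2.0.0 / 0.16.2 pair pinned above) can fail to import even though pip lists it. A hedged sketch (`import_status` is an illustrative helper, not a transformers API):

```python
import importlib


def import_status(name):
    """Return 'ok' if the package imports cleanly, else the error text.

    Illustrative diagnostic helper only."""
    try:
        importlib.import_module(name)
        return "ok"
    except Exception as exc:  # a broken install can raise more than ImportError
        return f"failed: {exc}"


print(import_status("json"))   # ok
print(import_status("torch"))  # surfaces the real import error if torch is broken
```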
closed
completed
false
9
[]
[]
2023-12-29T17:24:50Z
2026-02-24T08:21:12Z
2024-02-11T08:03:47Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Marwen94
36,446,303
MDQ6VXNlcjM2NDQ2MzAz
User
false
huggingface/transformers
2,143,620,996
I_kwDOCUB6oc5_xQ-E
29,127
https://github.com/huggingface/transformers/issues/29127
https://api.github.com/repos/huggingface/transformers/issues/29127
err_handle(layoutlmv3): Error message doesn't give much clarity when boxes not containing enough information
### System Info - `transformers` version: 4.37.2 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.11.5 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.2.0+cpu (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction **Model** I am using LayoutLMv3: when `boxes = [[123, 53], [36, 87], ...]` (basically any list that does not follow the proper format; by proper format I mean `[[123, 346, 234, 634], [356, 568, 234, 25], ...]`) ```python encoding = processor( image_1, text, boxes=boxes, max_length=512, padding="max_length", truncation=True, return_tensors="pt" ) ``` It produces this error message: ``` ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (labels in this case) have excessive nesting (inputs type list where type int is expected). ``` **To Reproduce** Steps to reproduce the behavior: 1. Add any list of boxes with not enough values, like `boxes = [[123, 53], [36, 87], ...]` 2. When run, it throws the ValueError mentioned above ### Expected behavior It could instead throw an error saying ``` ValueError: boxes doesn't have enough values inside each box. Each box should contain 4 values ```
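The suggested pre-check could look like the following (a hypothetical `validate_boxes` helper for illustration; the processor does not currently perform this validation):

```python
def validate_boxes(boxes):
    """Raise a descriptive error when any box is not of the form
    [x0, y0, x1, y1], instead of the opaque tensor-creation error.

    Hypothetical pre-check helper, not existing processor code."""
    for i, box in enumerate(boxes):
        if len(box) != 4:
            raise ValueError(
                f"Box at index {i} has {len(box)} values; each box must "
                "contain exactly 4 values: [x0, y0, x1, y1]."
            )


validate_boxes([[123, 346, 234, 634], [356, 568, 234, 25]])  # passes silently
```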
closed
completed
false
7
[ "Good Second Issue" ]
[]
2024-02-20T06:18:05Z
2026-02-27T14:42:58Z
2026-02-27T14:42:50Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Sushaanth-Suresh-Kumar
123,300,765
U_kgDOB1lrnQ
User
false
huggingface/transformers
2,144,914,235
I_kwDOCUB6oc5_2Ms7
29,149
https://github.com/huggingface/transformers/issues/29149
https://api.github.com/repos/huggingface/transformers/issues/29149
Generate: support passing position_ids
Thank you @tengomucho, for uncovering this bug. ### The problem In a nutshell, passing the correct `position_ids` to `generate` should result in exactly the same results as not passing them. In other words, the following test should pass on all models, if added to `GenerationTesterMixin`. We can see that it is failing in general. ```py def test_passing_position_ids(self): # Check that passing position ids to generate yields the same results as not passing them, if the position ids # are correctly built. If the test fails, it means one of two things: # 1 - the manual position ids are not being piped correctly; OR # 2 - the automated position ids are not being correctly built. for model_class in self.all_generative_model_classes: config, input_ids, attention_mask, _ = self._get_input_ids_and_config(batch_size=1) if config.is_encoder_decoder: self.skipTest("This model does not support position_ids") # To truly test this property, let's create a batch where the second row corresponds to the test input with # left padding of 1. 
pad_token = torch.tensor([[config.pad_token_id or 0]], device=input_ids.device, dtype=input_ids.dtype) input_ids = torch.cat((input_ids, torch.cat((pad_token, input_ids[:, 1:]), dim=1)), dim=0) pad_mask = torch.zeros((1, 1), dtype=attention_mask.dtype, device=attention_mask.device) attention_mask = torch.cat((attention_mask, torch.cat((pad_mask, attention_mask[:, 1:]), dim=1)), dim=0) position_ids = torch.clamp(torch.cumsum(attention_mask, dim=-1) - 1, min=0) config.use_cache = True config.is_decoder = True model = model_class(config).to(torch_device).eval() try: output_position_ids = model.generate( input_ids, attention_mask=attention_mask, position_ids=position_ids, max_new_tokens=10 ) except ValueError as exc: if "The following `model_kwargs` are not used by the model: ['position_ids']" in str(exc): self.skipTest("This model does not support position_ids") else: raise output_no_position_ids = model.generate( input_ids, attention_mask=attention_mask, max_new_tokens=10 ) self.assertListEqual(output_no_position_ids.tolist(), output_position_ids.tolist()) ``` ### The fix There are two root causes for this: 1. `position_ids` is rejected in some models when it is passed (e.g. see [here](https://github.com/huggingface/transformers/blob/3c00b885b92fbcd0e7451e56ccf424a2d5a19bbb/src/transformers/models/gpt2/modeling_gpt2.py#L1022)). These models often assume no padding when `position_ids` is rejected. 2. `position_ids` is never updated, so it is only correct when created from scratch (=not passed). As such, a fix to this problem should consist in updating `position_ids` in `generate`, with `prepare_inputs_for_generation` only creating new `position_ids` when they don't exist. The test pasted above should be part of our tests after fixing the issue.
closed
completed
false
1
[ "WIP", "bug", "Generation" ]
[ "gante" ]
2024-02-20T17:34:00Z
2026-02-12T09:57:21Z
2026-02-12T09:57:21Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
gante
12,240,844
MDQ6VXNlcjEyMjQwODQ0
User
false
huggingface/transformers
2,178,032,113
I_kwDOCUB6oc6B0iHx
29,576
https://github.com/huggingface/transformers/issues/29576
https://api.github.com/repos/huggingface/transformers/issues/29576
error: casting `&T` to `&mut T` is undefined behavior, even if the reference is unused, consider instead using an `UnsafeCell` --> tokenizers-lib/src/models/bpe/trainer.rs:517:47
### System Info ``` 2024-03-11 01:14:30.782590: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2024-03-11 01:14:30.782649: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2024-03-11 01:14:30.784014: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2024-03-11 01:14:31.954016: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2024-03-11 01:14:34.928846: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0. CUDA backend failed to initialize: Found cuBLAS version 120103, but JAX was built against version 120205, which is newer. The copy of cuBLAS that is installed must be at least as new as the version against which JAX was built. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.) Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. 
- `transformers` version: 4.38.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): 2.15.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.8.1 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction On Google Colab trying to install transformers 4.0.6 I installed rust 1.76.0 which got: error: casting `&T` to `&mut T` Then I tried rust 1.72.0 which was supposed to be less sensitive. ``` !pip install --upgrade transformers==4.06 --verbose warning: `#[macro_use]` only has an effect on `extern crate` and modules --> tokenizers-lib/src/utils/mod.rs:24:1 | 24 | #[macro_use] | ^^^^^^^^^^^^ | = note: `#[warn(unused_attributes)]` on by default warning: `#[macro_use]` only has an effect on `extern crate` and modules --> tokenizers-lib/src/utils/mod.rs:35:1 | 35 | #[macro_use] | ^^^^^^^^^^^^ warning: variable does not need to be mutable --> tokenizers-lib/src/models/unigram/model.rs:280:21 | 280 | let mut target_node = &mut best_path_ends_at[key_pos]; | ----^^^^^^^^^^^ | | | help: remove this `mut` | = note: `#[warn(unused_mut)]` on by default warning: variable does not need to be mutable --> tokenizers-lib/src/models/unigram/model.rs:297:21 | 297 | let mut target_node = &mut best_path_ends_at[starts_at + mblen]; | ----^^^^^^^^^^^ | | | help: remove this `mut` warning: variable does not need to be mutable --> tokenizers-lib/src/pre_tokenizers/byte_level.rs:175:59 | 175 | 
encoding.process_tokens_with_offsets_mut(|(i, (token, mut offsets))| { | ----^^^^^^^ | | | help: remove this `mut` warning: fields `bos_id` and `eos_id` are never read --> tokenizers-lib/src/models/unigram/lattice.rs:59:5 | 53 | pub struct Lattice<'a> { | ------- fields in this struct ... 59 | bos_id: usize, | ^^^^^^ 60 | eos_id: usize, | ^^^^^^ | = note: `Lattice` has a derived impl for the trait `Debug`, but this is intentionally ignored during dead code analysis = note: `#[warn(dead_code)]` on by default error: casting `&T` to `&mut T` is undefined behavior, even if the reference is unused, consider instead using an `UnsafeCell` --> tokenizers-lib/src/models/bpe/trainer.rs:517:47 | 513 | let w = &words[*i] as *const _ as *mut _; | -------------------------------- casting happend here ... 517 | let word: &mut Word = &mut (*w); | ^^^^^^^^^ | = note: for more information, visit <https://doc.rust-lang.org/book/ch15-05-interior-mutability.html> = note: `#[deny(invalid_reference_casting)]` on by default warning: `tokenizers` (lib) generated 6 warnings error: could not compile `tokenizers` (lib) due to 1 previous error; 6 warnings emitted Caused by: process didn't exit successfully: `/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc --crate-name tokenizers --edition=2018 tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg 'feature="default"' --cfg 'feature="indicatif"' --cfg 'feature="progressbar"' -C metadata=b4902f315560f1ee -C extra-filename=-b4902f315560f1ee --out-dir /tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps -L dependency=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps --extern clap=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libclap-2ed1bc4e1f137d6a.rmeta --extern 
derive_builder=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libderive_builder-927868a0edb8a08b.so --extern esaxx_rs=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libesaxx_rs-00367ded6e9df21a.rmeta --extern indicatif=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libindicatif-893b81a84fee081a.rmeta --extern itertools=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libitertools-051f3c77bf3684bc.rmeta --extern lazy_static=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/liblazy_static-df89fd9b4b197d62.rmeta --extern log=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/liblog-db5663930c6645cc.rmeta --extern onig=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libonig-ab094d5df50c1ae3.rmeta --extern rand=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/librand-57abfece9e5d7a1e.rmeta --extern rayon=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/librayon-29cb179ffa5164fd.rmeta --extern rayon_cond=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/librayon_cond-f3b239ca8b442c66.rmeta --extern regex=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libregex-48a23c12665b1ac6.rmeta --extern regex_syntax=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libregex_syntax-ace402a25abfd585.rmeta --extern serde=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libserde-00a50b461a53bfab.rmeta --extern serde_json=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libserde_json-fef87182d967f2a8.rmeta --extern 
spm_precompiled=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libspm_precompiled-af1cd270a9f7042e.rmeta --extern unicode_normalization_alignments=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libunicode_normalization_alignments-a1711ea2b5cfdc20.rmeta --extern unicode_segmentation=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libunicode_segmentation-0df53fbf44393ad7.rmeta --extern unicode_categories=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/deps/libunicode_categories-7c6fabd07afa2a56.rmeta -L native=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/build/esaxx-rs-17f45370f913980e/out -L native=/tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256/target/release/build/onig_sys-1b013bbbe8847e4a/out` (exit status: 1) warning: build failed, waiting for other jobs to finish... error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib --` failed with code 101 error: subprocess-exited-with-error × Building wheel for tokenizers (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. full command: /usr/bin/python3 /usr/local/lib/python3.10/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmp_wjb9r9d cwd: /tmp/pip-install-rotcz5nj/tokenizers_f211137d6c704baa977bfe0569424256 Building wheel for tokenizers (pyproject.toml) ... error ERROR: Failed building wheel for tokenizers Failed to build tokenizers ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects ``` I tried with different versions of rust. rust 1.72.0 is supposed to work. 
``` !rustup toolchain install 1.72.0 !rustup default 1.72.0 !rustc --version ``` ### Expected behavior Install transformers v4.06 on Google Colab.
closed
completed
false
25
[]
[]
2024-03-11T01:23:29Z
2026-02-12T09:45:57Z
2024-05-22T12:35:18Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
dbl001
3,105,499
MDQ6VXNlcjMxMDU0OTk=
User
false
huggingface/transformers
2,199,099,680
I_kwDOCUB6oc6DE5kg
29,769
https://github.com/huggingface/transformers/issues/29769
https://api.github.com/repos/huggingface/transformers/issues/29769
Support batch_size > 1 in assisted decoding
### Feature request

Support batch_size > 1 in assisted decoding.

### Motivation

With this support, we can provide more capability for assisted decoding, including beam search.

### Your contribution

I would like to submit a PR to enable this; I mainly need to cut and pad the past_key_values because each sequence may have a different length in each generation round. Would like to hear your opinion @gante
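A hedged sketch of the cut-and-pad step described above, outside any real model: `crop_and_pad_cache` is a hypothetical helper (real transformers caches may be `Cache` objects rather than tuples), which crops each sequence's cached keys/values back to its accepted length after candidate verification and right-pads with zeros so the batch shares one cache length again.

```python
# Hypothetical helper illustrating "cut and pad past_key_values":
# each layer's cache is a (key, value) pair of shape
# (batch, num_heads, seq_len, head_dim); valid_lens gives the number
# of positions each sequence actually accepted in this round.
import torch

def crop_and_pad_cache(past_key_values, valid_lens):
    target = max(valid_lens)
    new_cache = []
    for key, value in past_key_values:
        batch, heads, _, dim = key.shape
        k = key.new_zeros(batch, heads, target, dim)
        v = value.new_zeros(batch, heads, target, dim)
        for i, n in enumerate(valid_lens):
            k[i, :, :n] = key[i, :, :n]   # keep only the accepted prefix
            v[i, :, :n] = value[i, :, :n]
        new_cache.append((k, v))
    return tuple(new_cache)
```

Padding (rather than cropping everything to the minimum) preserves each sequence's verified tokens while keeping a rectangular cache, which is the part that only matters once batch_size > 1.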
closed
completed
false
3
[ "Feature request", "Generation" ]
[]
2024-03-21T04:37:45Z
2026-02-26T07:31:11Z
2024-04-01T08:22:48Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
jiqing-feng
107,918,818
U_kgDOBm614g
User
false
huggingface/transformers
2,211,192,891
I_kwDOCUB6oc6DzCA7
29,911
https://github.com/huggingface/transformers/issues/29911
https://api.github.com/repos/huggingface/transformers/issues/29911
Support DBRX Model
### Feature request

Support the DBRX model (only correct pronunciation: DB-Rex): [blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm). Code is from the open source [databricks/dbrx](https://github.com/databricks/dbrx) repository.

### Motivation

> Across a range of standard benchmarks, DBRX sets a new state-of-the-art for established open LLMs. Moreover, it provides the open community and enterprises building their own LLMs with capabilities that were previously limited to closed model APIs; according to our measurements, it surpasses GPT-3.5, and it is competitive with Gemini 1.0 Pro. It is an especially capable code model, surpassing specialized models like CodeLLaMA-70B on programming, in addition to its strength as a general-purpose LLM.

### Your contribution

https://github.com/huggingface/transformers/pull/29910
open
null
false
9
[ "New model" ]
[]
2024-03-27T16:03:01Z
2026-02-27T15:40:50Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
milocress
19,612,401
MDQ6VXNlcjE5NjEyNDAx
User
false
huggingface/transformers
570,865,148
MDU6SXNzdWU1NzA4NjUxNDg=
3,021
https://github.com/huggingface/transformers/issues/3021
https://api.github.com/repos/huggingface/transformers/issues/3021
Can GPT2LMHeadModel do batch inference with variable sentence lengths?
Given that the GPT2 tokenizer does not have an internal pad_token_id, how do I pad sentences and do batch inference using GPT2LMHeadModel? Specifically, my code is:
```
prompt_text = [
    'in this paper we',
    'we are trying to',
    'The purpose of this workshop is to check whether we can',
]
tokens = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(x, add_prefix_space=True)) for x in prompt_text]
inputs = pad_sequence([torch.LongTensor(x) for x in tokens], batch_first=True, padding_value=tokenizer.eos_token_id)
outputs, past = model(input_ids=inputs, attention_mask=None)
```
This returns non-relevant predictions, since GPT2 will consider the eos_tokens as real input and start a new sentence within the batch. Can anyone please share sample code that uses GPT2LMHeadModel to do batch inference with varying sentence lengths? Thanks!
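For reference, the standard workaround is to reuse the eos token as padding, pad on the *left*, and pass an explicit attention_mask so the model ignores the padding. Below is a minimal, self-contained sketch; the `left_pad` helper is illustrative (with transformers you would instead set `tokenizer.pad_token = tokenizer.eos_token` and `tokenizer.padding_side = "left"` and let the tokenizer build the mask).

```python
# Left-pad variable-length token lists and build the matching attention mask.
# Left padding keeps the last position of every row a real token, which is
# what autoregressive generation conditions on.
def left_pad(token_lists, pad_id):
    max_len = max(len(t) for t in token_lists)
    input_ids, attention_mask = [], []
    for toks in token_lists:
        n_pad = max_len - len(toks)
        input_ids.append([pad_id] * n_pad + toks)
        attention_mask.append([0] * n_pad + [1] * len(toks))
    return input_ids, attention_mask

# With transformers, roughly (sketch, not run here):
#   tokenizer.pad_token = tokenizer.eos_token
#   tokenizer.padding_side = "left"
#   enc = tokenizer(prompt_text, return_tensors="pt", padding=True)
#   model.generate(enc.input_ids, attention_mask=enc.attention_mask, ...)
```

The attention mask (not the pad id) is what actually stops the model from attending to the padding positions.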
closed
completed
false
58
[]
[ "patrickvonplaten" ]
2020-02-25T22:05:02Z
2026-03-19T07:15:26Z
2020-02-26T13:11:23Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
schizism
3,358,940
MDQ6VXNlcjMzNTg5NDA=
User
false
huggingface/transformers
2,246,614,757
I_kwDOCUB6oc6F6J7l
30,277
https://github.com/huggingface/transformers/issues/30277
https://api.github.com/repos/huggingface/transformers/issues/30277
Jamba-v01 Model + Deepspeed Zero3 lead to "RuntimeError: Detected mismatch between collectives on ranks."
### System Info

- `transformers` version: 4.39.0
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.29.2
- Accelerate config: not found
- Deepspeed version: 0.14.1
- PyTorch version (GPU?): 2.1.0a0+32f93b1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

deepspeed config:
```json
{
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "sub_group_size": 0,
    "reduce_bucket_size": 1.677722e+07,
    "stage3_prefetch_bucket_size": 1.509949e+07,
    "stage3_param_persistence_threshold": 4.096000e+04,
    "stage3_max_live_parameters": 1.000000e+09,
    "stage3_max_reuse_distance": 1.000000e+09,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "bf16": {
    "enabled": true,
    "auto_cast": false,
    "loss_scale": 0,
    "initial_scale_power": 32,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "train_batch_size": 256,
  "gradient_accumulation_steps": 8,
  "train_micro_batch_size_per_gpu": 2,
  "wall_clock_breakdown": false,
  "steps_per_print": inf,
  "fp16": { "enabled": false },
  "zero_allow_untested_optimizer": true
}
```
Training with data parallelism and DeepSpeed ZeRO-3 offloading.

### Who can help?

@pacman100

### Information

- [ ] The official example scripts
- [x] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)

### Reproduction

SFT training using the Jamba-v0.1 model with the Accelerate Trainer and DeepSpeed ZeRO-3 offload.

### Expected behavior

After a few iterations, the following error occurs.
Error Message:
```shell
  File "/share5/users/kqsong/code/FastChat/fastchat/train/train_better_preprocessing.py", line 306, in <module>
    train()
  File "/share5/users/kqsong/code/FastChat/fastchat/train/train_better_preprocessing.py", line 300, in train
    trainer.train()
  File "/root/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1780, in train
    return inner_training_loop(
  File "/root/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2118, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/root/.local/lib/python3.10/site-packages/transformers/trainer.py", line 3036, in training_step
    loss = self.compute_loss(model, inputs)
  File "/root/.local/lib/python3.10/site-packages/transformers/trainer.py", line 3059, in compute_loss
    outputs = model(**inputs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1855, in forward
    loss = self.module(*inputs, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/modeling_jamba.py", line 1849, in forward
    outputs = self.model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/modeling_jamba.py", line 1715, in forward
    layer_outputs = self._gradient_checkpointing_func(
  File "/usr/local/lib/python3.10/dist-packages/torch/_compile.py", line 24, in inner
    return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 333, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/external_utils.py", line 17, in inner
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py", line 450, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 539, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py", line 230, in forward
    outputs = run_function(*args)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/modeling_jamba.py", line 1361, in forward
    hidden_states, router_logits = self.moe(hidden_states)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/modeling_jamba.py", line 1211, in forward
    current_hidden_states = expert_layer(current_state) * routing_weights[top_x_list, idx_list, None]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1557, in _call_impl
    args_result = hook(self, args)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 278, in _pre_forward_module_hook
    self.pre_sub_module_forward_function(module)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 452, in pre_sub_module_forward_function
    param_coordinator.fetch_sub_module(sub_module, forward=True)
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 333, in _fn
    return fn(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 385, in fetch_sub_module
    self.__all_gather_params(params_to_prefetch, forward)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 434, in __all_gather_params
    self.__all_gather_params_(nonquantized_params, forward, quantize=self.zero_quantized_weights)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 463, in __all_gather_params_
    handle = param_group[0].all_gather_coalesced(param_group, quantize=quantize)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1259, in all_gather_coalesced
    handles.append(_all_gather_dtype(dtype, params, world_size, rank_in_group, ds_process_group))
  File "/root/.local/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1147, in _all_gather_dtype
    handle = _dist_allgather_fn(partitions[rank_in_group], flat_tensor, ds_process_group)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 95, in _dist_allgather_fn
    return instrument_w_nvtx(dist.allgather_fn)(output_tensor, input_tensor, group=group, async_op=True)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 320, in allgather_fn
    return all_gather_into_tensor(output_tensor, input_tensor, group=group, async_op=async_op, debug=debug)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 117, in log_wrapper
    return func(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 305, in all_gather_into_tensor
    return cdb.all_gather_into_tensor(output_tensor=output_tensor, input_tensor=tensor, group=group, async_op=async_op)
  File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 333, in _fn
    return fn(*args, **kwargs)
  File "/root/.local/lib/python3.10/site-packages/deepspeed/comm/torch.py", line 199, in all_gather_into_tensor
    return self.all_gather_function(output_tensor=output_tensor,
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 2886, in all_gather_into_tensor
    work = group._allgather_base(output_tensor, input_tensor)
RuntimeError: Detected mismatch between collectives on ranks. Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=1843749, OpType=_ALLGATHER_BASE, TensorShape=[2228224], TensorDtypes=BFloat16, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))), but Rank 10 is running collective: CollectiveFingerPrint(SequenceNumber=1843749, OpType=_ALLGATHER_BASE, TensorShape=[131072], TensorDtypes=BFloat16, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))). Collectives differ in the following aspects: Tensor shapes: 2228224 vs 131072
```
closed
completed
false
9
[ "DeepSpeed", "bug" ]
[]
2024-04-16T18:04:07Z
2026-02-21T17:10:24Z
2024-11-23T08:13:49Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
KaiQiangSong
9,112,038
MDQ6VXNlcjkxMTIwMzg=
User
false
huggingface/transformers
2,258,877,823
I_kwDOCUB6oc6Go71_
30,430
https://github.com/huggingface/transformers/issues/30430
https://api.github.com/repos/huggingface/transformers/issues/30430
Remove `mps` workaround for `isin()`
### Feature request

Remove the `mps` workaround for `isin()`.

### Motivation

#30376 introduced a workaround for `isin()` on `mps` devices, because PyTorch does not support that op yet: https://github.com/pytorch/pytorch/issues/77764#issuecomment-2067838075. Going forward, it'd be desirable to use the much more readable `isin()` version. This issue is meant to track PyTorch support of `isin()` on `mps` so we can remove the workaround and simplify the code.

### Your contribution

I can submit a PR when the op is eventually supported.
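For context, this kind of workaround boils down to a broadcast-and-compare that is equivalent to `torch.isin` for 1-D inputs; a hedged sketch (the function name is illustrative, not the actual code from #30376):

```python
# Fallback for devices where torch.isin is unavailable: broadcast
# elements (N, 1) against test_elements (M,) to an (N, M) comparison,
# then reduce with any() over the last dimension.
import torch

def isin_fallback(elements: torch.Tensor, test_elements: torch.Tensor) -> torch.Tensor:
    return (elements.unsqueeze(-1) == test_elements).any(dim=-1)

e = torch.tensor([1, 2, 3, 4])
t = torch.tensor([2, 4])
print(isin_fallback(e, t))  # same result as torch.isin(e, t) on cpu/cuda
```

The broadcast version materializes an N×M boolean tensor, which is why the native `isin()` is preferable once `mps` supports it.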
closed
completed
false
10
[ "Should Fix", "WIP" ]
[]
2024-04-23T13:23:10Z
2026-02-24T12:56:26Z
2026-02-24T11:11:32Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
pcuenca
1,177,582
MDQ6VXNlcjExNzc1ODI=
User
false
huggingface/transformers
2,267,456,217
I_kwDOCUB6oc6HJqLZ
30,525
https://github.com/huggingface/transformers/issues/30525
https://api.github.com/repos/huggingface/transformers/issues/30525
Support align_corners=True in image_transforms module
### Feature request

For a new model I'm working on #30136 I'd need to resize images in the image processor using `align_corners=True`, as the original code uses `torch.nn.functional.interpolate(..., align_corners=True)` for resizing images during pre-processing.

### Motivation

It would be great to have this option available so that we can remove the torch dependency from the image processor.

### Your contribution

Not sure I can look into this, but @molbap showed interest in looking into this.
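A small numeric illustration of what the flag changes, using `torch.nn.functional.interpolate` directly (a sketch of the behaviour being requested, not a proposed implementation): with `align_corners=True` the first and last input samples map exactly onto the first and last output samples, which is the convention the original preprocessing relies on.

```python
# Upsample a 2-sample signal to 4 samples under both conventions.
import torch
import torch.nn.functional as F

x = torch.tensor([[[0.0, 1.0]]])  # (batch, channels, width)
up_true = F.interpolate(x, size=4, mode="linear", align_corners=True)
up_false = F.interpolate(x, size=4, mode="linear", align_corners=False)
print(up_true)   # corners hit 0.0 and 1.0 exactly: [0, 1/3, 2/3, 1]
print(up_false)  # half-pixel-centers convention:   [0, 0.25, 0.75, 1]
```

The same difference appears with `mode="bilinear"` on images; matching the original model's choice matters for numerical parity with its checkpoints.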
closed
completed
false
4
[ "Feature request", "Vision" ]
[]
2024-04-28T09:36:32Z
2026-03-18T09:49:12Z
2026-03-18T09:49:12Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
NielsRogge
48,327,001
MDQ6VXNlcjQ4MzI3MDAx
User
false
huggingface/transformers
2,272,057,528
I_kwDOCUB6oc6HbNi4
30,579
https://github.com/huggingface/transformers/issues/30579
https://api.github.com/repos/huggingface/transformers/issues/30579
Community contribution: enable dynamic resolution input for more vision models.
### Feature request

Some of our models interpolate their positional embeddings, enabling pretrained checkpoints to be used on different input resolutions. For example, [here in ViT](https://github.com/huggingface/transformers/blob/75bbfd5b2237b7e35a9265731ecf63022579e7e2/src/transformers/models/vit/modeling_vit.py#L79).

- [x] [beit](https://github.com/huggingface/transformers/blob/main/src/transformers/models/beit/modeling_beit.py) & [data2vec](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py) @OmarManzoor #31053
- [x] [blip](https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip/modeling_blip.py), [blip_2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip_2/modeling_blip_2.py) @zafstojano #30722
- [x] [clip](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/modeling_clip.py) and clip related models: [altclip](https://github.com/huggingface/transformers/blob/main/src/transformers/models/altclip/modeling_altclip.py), [bridgetower](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bridgetower/modeling_bridgetower.py), [chinese_clip](https://github.com/huggingface/transformers/blob/main/src/transformers/models/chinese_clip/modeling_chinese_clip.py), [git](https://github.com/huggingface/transformers/blob/main/src/transformers/models/git/modeling_git.py), [kosmos2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/kosmos2/modeling_kosmos2.py) ~#30783~ #32600
- [x] [deit](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deit/modeling_deit.py) #31131
- [ ] [owlvit](https://github.com/huggingface/transformers/blob/main/src/transformers/models/owlvit/modeling_owlvit.py), [owlv2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/owlv2/modeling_owlv2.py) @yMayanand https://github.com/huggingface/transformers/pull/34764
- [x] [perceiver](https://github.com/huggingface/transformers/blob/main/src/transformers/models/perceiver/modeling_perceiver.py) @g1y5x3 #30979
- [x] [siglip](https://github.com/huggingface/transformers/blob/main/src/transformers/models/siglip/modeling_siglip.py) @davidgxue https://github.com/huggingface/transformers/pull/30719
- [x] [swin](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/modeling_swin.py), [donut](https://github.com/huggingface/transformers/blob/main/src/transformers/models/donut/modeling_donut_swin.py), [maskformer swin](https://github.com/huggingface/transformers/blob/main/src/transformers/models/maskformer/modeling_maskformer_swin.py), [swinv2](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swinv2/modeling_swinv2.py) #30656 @the-neural-networker
- [x] [tvp](https://github.com/huggingface/transformers/blob/main/src/transformers/models/tvp/modeling_tvp.py) @bhuvanmdev #30863
- [x] [vit_mae](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit_mae/modeling_vit_mae.py) #30657 #30732 @bhuvanmdev
- [x] [vivit](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vivit/modeling_vivit.py) #30630 @jla524

### Motivation

Let's add this to more models, to leverage existing checkpoints for new cases!

### Your contribution

For anyone who would like to contribute, please comment on the issue, claiming a model you'd like to work on and share a link to the PR. Each PR should:

* Add an `interpolate_pos_encoding` method
* Add a test showing the model can correctly interpolate an input image of a different size

There was a PR opened to add this to CLIP models, which is now inactive, but useful for reference of the changes to make: https://github.com/huggingface/transformers/pull/27457

Once the PR is ready, you can ping me for review 🤗
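For orientation, the core of an `interpolate_pos_encoding` method is resampling the patch position-embedding grid to a new size while leaving the CLS token's embedding untouched. A hedged, standalone sketch (shapes and the bicubic mode follow the ViT example linked above; per-model details vary):

```python
# Resample a ViT-style position embedding (1, 1 + old_num_patches, dim)
# to cover new_num_patches patches, assuming square patch grids.
import math
import torch
import torch.nn.functional as F

def interpolate_pos_encoding(pos_embed, new_num_patches):
    cls_pe, patch_pe = pos_embed[:, :1], pos_embed[:, 1:]  # split off CLS slot
    dim = pos_embed.shape[-1]
    old = int(math.sqrt(patch_pe.shape[1]))
    new = int(math.sqrt(new_num_patches))
    # (1, old*old, dim) -> (1, dim, old, old) so interpolate sees a 2D grid
    grid = patch_pe.reshape(1, old, old, dim).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=(new, new), mode="bicubic", align_corners=False)
    patch_pe = grid.permute(0, 2, 3, 1).reshape(1, new * new, dim)
    return torch.cat([cls_pe, patch_pe], dim=1)
```

The accompanying test in each PR then checks that a model fed an image of a different resolution produces outputs of the expected shape.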
closed
completed
false
40
[ "Good First Issue", "Vision" ]
[]
2024-04-30T17:00:10Z
2026-02-10T12:47:55Z
2026-02-10T12:47:55Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
amyeroberts
22,614,925
MDQ6VXNlcjIyNjE0OTI1
User
false
huggingface/transformers
2,287,419,716
I_kwDOCUB6oc6IV0FE
30,725
https://github.com/huggingface/transformers/issues/30725
https://api.github.com/repos/huggingface/transformers/issues/30725
Support for Multiple Datasets and Domain-Specific Loss Calculation in Trainer
### Feature request

I am currently working on a project that involves sequence-level distillation across multiple domains, requiring the handling of separate datasets for each domain within a single training loop. Specifically, the challenge involves integrating data from four distinct domains, computing loss individually per domain, and then aggregating these losses into a global loss measure that can guide the overall training process.

### Motivation

Ideally, the Trainer class would natively support the following features:

Multiple dataset handling: ability to pass multiple datasets into the Trainer directly, with each dataset potentially representing a different domain.

Domain-specific loss calculation: support for defining and computing loss separately for each domain's dataset within the training loop and then integrating these losses into a global training objective.

### Your contribution

Currently, the Trainer class in the Transformers library supports passing a single dataset for training and evaluation. To handle multiple datasets or to calculate domain-specific losses, one must subclass the Trainer and override methods such as compute_loss, which complicates the implementation and integration of domain-specific training strategies.
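A minimal sketch of the requested aggregation, kept outside the Trainer for clarity (`global_loss` and `MultiDomainTrainer` are hypothetical names, not existing API): per-domain losses are reduced into one global objective with optional domain weights.

```python
# Combine per-domain losses {domain: loss} into a single scalar objective.
# With no weights given, each domain contributes equally.
def global_loss(per_domain_losses, weights=None):
    domains = sorted(per_domain_losses)
    if weights is None:
        weights = {d: 1.0 / len(domains) for d in domains}
    return sum(weights[d] * per_domain_losses[d] for d in domains)

# Today this has to live in a Trainer subclass, roughly:
#   class MultiDomainTrainer(Trainer):
#       def compute_loss(self, model, inputs, return_outputs=False):
#           losses = {d: domain_loss(model, batch) for d, batch in inputs.items()}
#           return global_loss(losses)
# where the dataloader yields one batch per domain per step.
```

The interesting design question is exactly the one the request raises: whether this weighting/reduction belongs in user code or behind a native multi-dataset Trainer API.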
open
null
false
19
[ "trainer", "Feature request" ]
[]
2024-05-09T10:45:25Z
2026-02-18T19:39:36Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
ghost
10,137
MDQ6VXNlcjEwMTM3
User
false
huggingface/transformers
2,313,264,506
I_kwDOCUB6oc6J4Z16
30,990
https://github.com/huggingface/transformers/issues/30990
https://api.github.com/repos/huggingface/transformers/issues/30990
Sentence Transformers Gets Stuck loading
### System Info

Ubuntu 20.04
Python 3.8.10
Updating the Nvidia driver is not possible; have to make do with CUDA 11.6 (Torch 1.13.0)
torch 1.13.0
transformers 4.38.1
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
sentence-transformers 2.7.0

### Who can help?

SentenceTransformer sometimes gets stuck loading forever (runs on a server). Only after rebooting the server does it become normal again for a while. Model: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
```
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
Updating the Nvidia driver is not possible; have to make do with CUDA 11.6 (Torch 1.13.0)

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

Not sure. Use the system specifications above and run this multiple times?
```
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('/data/MiniLM')
embeddings = model.encode(sentences)
print(embeddings)
```

### Expected behavior

Doesn't get stuck, or gives an error.
closed
completed
false
8
[]
[]
2024-05-23T15:49:26Z
2026-03-11T19:02:03Z
2024-07-28T08:04:54Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Jaswir
15,957,528
MDQ6VXNlcjE1OTU3NTI4
User
false
huggingface/transformers
2,359,684,102
I_kwDOCUB6oc6MpewG
31,474
https://github.com/huggingface/transformers/issues/31474
https://api.github.com/repos/huggingface/transformers/issues/31474
Quantization support for heads and embeddings
### Feature request

Hi! I've been researching LLM quantization recently ([this paper](https://arxiv.org/abs/2405.14852)), and noticed a potentially important issue that arises when using LLMs with 1-2 bit quantization.

### Problem description :mag:

Transformers supports several great ways of quantizing the transformer 'body', but it seems that there is no built-in way to quantize embeddings and/or the lm head. This is important because some of the recent LLMs have very large vocabularies, and as a result, their embeddings and heads can get massive. For instance, [Llama 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B) has a 128K token vocabulary, [Qwen 2](https://huggingface.co/Qwen/Qwen2-72B-Instruct) has over 150K, and [Gemma 2b](https://huggingface.co/google/gemma-2b) has 256K.

As a result, if you load NF4 or AQLM quantized models, their **embeddings can take up 50% or more of the model footprint**. This is even more critical for lower-bitwidth quantization:

![https://galqiwi.ru/persistent/2024-06-18/embed-1.png](https://galqiwi.ru/persistent/2024-06-18/embed-1.png)

### Feature Request :rocket:

It would be great if transformers had a flag to quantize embeddings and heads using some of the existing quantization methods. One simple way would be to use LLM.int8 or NF4 by Tim Dettmers, since transformers already supports these. I've investigated how quantizing embeddings with these methods affects common models. Below is model perplexity for [Llama 3 8B using AQLM+PV 2-bit quantization](https://huggingface.co/ISTA-DASLab/Meta-Llama-3-8B-AQLM-PV-2Bit-1x16). I measured three configurations: fp16 embeddings, int8 embeddings, and NF4 embeddings with the same parameters that transformers uses for linear layers.

![https://galqiwi.ru/persistent/2024-06-18/emb_v3.png](https://galqiwi.ru/persistent/2024-06-18/emb_v3.png) ![https://galqiwi.ru/persistent/2024-06-18/head_v3.png](https://galqiwi.ru/persistent/2024-06-18/head_v3.png)

The values represent perplexity on the [WikiText-2 test set](https://huggingface.co/datasets/Salesforce/wikitext/viewer/wikitext-2-v1) measured with the same protocol used in the [GPTQ](https://arxiv.org/abs/2210.17323) / [AQLM](https://arxiv.org/abs/2401.06118) / [QuIP#](https://arxiv.org/pdf/2402.04396.pdf) papers. The code for these measurements can be found [here](https://gist.github.com/galqiwi/cb896f39052d1f4f718cb772040f3088).

Overall, 8-bit compression looks nearly lossless: the increase in perplexity does not exceed the error you get when quantizing the transformer with the same LLM.int8 codec. In turn, NF4 introduces some error (within 0.05 for Llama 3), but I would argue that this trade-off makes sense for low-memory applications. Also, embeddings appear easier to quantize than heads.

### Implementation details :gear:

There are multiple obstacles on the way to implementing this feature:

#### No support for mixed quantization

Currently, transformers does not support quantizing with multiple `HfQuantizer`s. IMO this is good behaviour, as interactions between different quantizers can be messy. The problem is that this feature requires the transformers library to use different compression methods for the body and the heads/embeddings. I think this can be solved by extending the `HfQuantizer` interface with embedding/head quantization methods and adding new `[embed,head]_quantization_config` arguments to `QuantizationConfigMixin`, or something in this area.

#### No support for embedding quantization in bitsandbytes

As far as I know, no quantization method supports an `nn.Embedding`-like interface. I can ask the bitsandbytes maintainers if they would accept a PR that fixes that. Also, there is a caveat: some models use tied embeddings/heads, and the implementation needs to be mindful of them.

### Cool things that this can enable :trophy:

If we can implement 4-bit embeddings, it will be possible to write a colab notebook that runs the [Llama 3 70B model](https://huggingface.co/meta-llama/Meta-Llama-3-70B) on a free-tier T4 GPU without offloading, by combining embedding/head quantization with the PV-tuned model https://huggingface.co/ISTA-DASLab/Meta-Llama-3-70B-AQLM-PV-1Bit-1x16. Another use case is running quantized LLMs on smartphones or embedded devices: for instance, [gemma-2b](https://huggingface.co/google/gemma-2b) can fit into 1GB RAM, but only if you quantize embeddings/heads in addition to transformer weights.

If you're interested in making a demo out of this, I'd be excited to implement it with your review / recommendations if you prefer, or wait for you to implement it your way. What do you think?

### Motivation

We are faced with a new bottleneck in model quantization. I think we can manage to fix it.

### Your contribution

I can allocate my time to submitting a PR, but we need to figure out what to do first.
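Rough, self-contained arithmetic behind the "50% or more" footprint claim, under assumed Llama-3-8B-like shapes (the figures below are approximations for illustration, not measurements):

```python
# Approximate Llama 3 8B shapes: 128K vocabulary, hidden size 4096, ~8B params.
vocab, hidden, total_params = 128_256, 4_096, 8.0e9

embed_head_params = 2 * vocab * hidden   # input embedding + untied lm head
body_params = total_params - embed_head_params

body_bytes = body_params * 2 / 8         # body quantized to 2 bits/param
embed_head_bytes = embed_head_params * 2 # embeddings/head kept in fp16

share = embed_head_bytes / (body_bytes + embed_head_bytes)
print(f"embeddings+head share of footprint: {share:.0%}")
```

With a 2-bit body, the fp16 embeddings and head end up dominating the checkpoint, which is exactly why quantizing them becomes worthwhile at these bitwidths.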
closed
completed
false
15
[ "Feature request", "Quantization" ]
[]
2024-06-18T11:56:34Z
2026-02-18T14:33:03Z
2026-02-18T14:21:10Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
galqiwi
17,232,054
MDQ6VXNlcjE3MjMyMDU0
User
false
huggingface/transformers
2,363,874,975
I_kwDOCUB6oc6M5d6f
31,515
https://github.com/huggingface/transformers/issues/31515
https://api.github.com/repos/huggingface/transformers/issues/31515
from_pretrained loads checkpoints too slowly
### System Info

latest python3.9.8

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

Loading a model with ollama is noticeably faster than the method above; is there any way to speed up loading?

### Expected behavior

1
closed
completed
false
3
[]
[]
2024-06-20T08:41:06Z
2026-03-20T03:44:30Z
2024-07-29T08:04:21Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
zhaoyuchen1128
167,266,669
U_kgDOCfhJbQ
User
false
huggingface/transformers
2,391,073,573
I_kwDOCUB6oc6OhOMl
31,795
https://github.com/huggingface/transformers/issues/31795
https://api.github.com/repos/huggingface/transformers/issues/31795
Confusing documentation of input_ids and past_key_values in model.forward
### System Info

Current documentation

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

The docs (e.g. for the Mistral forward method) state that:

> If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values).

> If past_key_values are used, the user can optionally input only the last input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).

https://huggingface.co/docs/transformers/main/model_doc/mistral#transformers.MistralModel.forward

### Expected behavior

It is my understanding that it is in fact not **optional** but **obligatory** to pass only the last input ids (those that don't have their past key value states given to this model), as there is no handling of the case where full input ids are passed. Cf. https://discuss.huggingface.co/t/correct-input-ids-when-passing-past-key-values/92044
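A toy cache makes the point concrete: feeding the full input_ids together with past_key_values would ingest the cached prefix a second time (illustrative code only, not the real modeling logic):

```python
# Toy autoregressive "model" with a key/value cache. Like real attention
# layers, every token passed in is appended to the cache unconditionally.
class ToyCachedModel:
    def __init__(self):
        self.cache = []          # stands in for past_key_values

    def forward(self, input_ids):
        self.cache.extend(input_ids)
        return len(self.cache)   # sequence length as seen by attention

model = ToyCachedModel()
model.forward([1, 2, 3])      # prefill: cache now covers 3 positions
model.forward([4])            # correct: pass only the new token

wrong = ToyCachedModel()
wrong.forward([1, 2, 3])
seq_len = wrong.forward([1, 2, 3, 4])  # full input_ids passed again
# seq_len is 7, not 4: the first three tokens are now cached twice, so
# positions and attention are inconsistent -- "obligatory", not optional.
```

This is why the wording "optionally only the last input_ids" is misleading: once the cache covers the first N positions, passing anything other than the uncached suffix silently corrupts the sequence.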
closed
completed
false
6
[ "WIP" ]
[ "zucchini-nlp" ]
2024-07-04T15:08:39Z
2026-02-19T11:43:05Z
2026-02-19T11:43:05Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
alex-hh
5,719,745
MDQ6VXNlcjU3MTk3NDU=
User
false
huggingface/transformers
2,418,835,728
I_kwDOCUB6oc6QLIEQ
32,090
https://github.com/huggingface/transformers/issues/32090
https://api.github.com/repos/huggingface/transformers/issues/32090
[Error] with Trainer: TypeError: Unsupported types (<class 'NoneType'>) passed to `_gpu_broadcast_one`.
### System Info - `transformers` version: 4.42.4 - Platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.24.0 - Safetensors version: 0.4.2 - Accelerate version: 0.32.0 - Accelerate config: not found - PyTorch version (GPU?): 2.3.1+cu121 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @muellerzr @SunMarc @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction https://gist.github.com/halixness/eadd6d1d89ae48597f70cb09f2b44139 ### Expected behavior Hello, I have written a simple training script to train from scratch a gpt2-like model with a large dataset of strings (molecules in SMILES format). After around ~2k steps (`batch_size=128`, `#samples = ~1.5M`), I encounter the following error: ``` TypeError: Unsupported types (<class 'NoneType'>) passed to `_gpu_broadcast_one`. Only nested list/tuple/dicts of objects that are valid for `is_torch_tensor` should be passed. ``` I tried already: - to use the `default_data_collator` instead and to manually group samples as [in the official example](https://github.com/huggingface/transformers/blob/89575b567e061fd87bdd655ba188b6c7a922d54a/examples/pytorch/language-modeling/run_clm.py#L513). - to check manually the value of the batch that apparently makes the script crash: no NaN values, it all seems to make sense. - to check whether the dataset initially contains any empty or None strings, which is not the case. I'm not sure about what could cause this error. Any suggestion is much appreciated!
closed
completed
false
3
[ "trainer", "bug" ]
[]
2024-07-19T13:01:37Z
2026-03-20T06:22:10Z
2024-09-22T08:06:59Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
halixness
20,798,848
MDQ6VXNlcjIwNzk4ODQ4
User
false
huggingface/transformers
2,480,528,547
I_kwDOCUB6oc6T2dyj
32,937
https://github.com/huggingface/transformers/issues/32937
https://api.github.com/repos/huggingface/transformers/issues/32937
Some causal LM models don't get position_ids in their forward pass.
### Feature request There are some models whose forward pass doesn't get position_ids. E.g. we can see that OPTModel doesn't get position_ids, while GPTJModel does. Most newer models do have position_ids. ### Motivation There are two main reasons we would like all LM models to get position_ids. 1. to have the API be consistent across all models. 2. position_ids are very important if you want to use flash-attention without padding during training. If I want to be able to pack two or more sentences in the same sequence, I would like to know that the model handles the sentences accordingly and treats each sentence as its own separate sentence. The flash-attention code uses position_ids to check if some sequences are packed and runs an appropriate function to make sure there is no cross-example contamination, but without this the model can't use this feature. The code always checks if position_ids is not None: https://github.com/huggingface/transformers/blob/v4.44.1/src/transformers/modeling_flash_attention_utils.py#L270 ### Your contribution I may be able to fix this and help with a PR, but would love a more experienced person to guide me.
closed
completed
false
6
[ "Good Second Issue", "Feature request" ]
[]
2024-08-22T11:18:43Z
2026-03-18T13:04:19Z
2026-03-18T13:04:19Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
avishaiElmakies
36,810,152
MDQ6VXNlcjM2ODEwMTUy
User
false
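The packing use case described in issue #32937 above can be sketched in plain Python. `packed_position_ids` is a hypothetical helper (not a transformers API) showing the position-id layout whose resets signal packed, independent sentences to the flash-attention path:

```python
def packed_position_ids(seq_lens):
    """Build position ids that restart at 0 for each packed sentence.

    A reset in the position-id sequence is the cue that lets an
    attention implementation treat packed sentences as independent.
    """
    pos = []
    for n in seq_lens:
        pos.extend(range(n))
    return pos

# two sentences of lengths 3 and 2 packed into one sequence
print(packed_position_ids([3, 2]))  # [0, 1, 2, 0, 1]
```

Passing such position_ids only helps if the model's forward accepts them, which is exactly the gap the issue points out for models like OPT.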
huggingface/transformers
2,481,187,393
I_kwDOCUB6oc6T4-pB
32,944
https://github.com/huggingface/transformers/issues/32944
https://api.github.com/repos/huggingface/transformers/issues/32944
clarify the label shifting behavior of llama models when `labels` is given.
### Feature request i believe `labels` in the training of causal LMs means the value to predict at time `n`, i.e., the next token. in other words, i'd assume, if `labels` is given, it should be already shifted by one in the data loader w.r.t. the `input_ids`. however, in `LlamaForCausalLM.forward()`, i found the labels are always shifted, silently. https://github.com/huggingface/transformers/blob/f1d822ba337499d429f832855622b97d90ac1406/src/transformers/models/llama/modeling_llama.py#L1205-L1210 ```python Args: labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. ``` ... ```python if labels is not None: # Shift so that tokens < n predict n shift_logits = logits[..., :-1, :].contiguous() shift_labels = labels[..., 1:].contiguous() # Flatten the tokens loss_fct = CrossEntropyLoss() shift_logits = shift_logits.view(-1, self.config.vocab_size) shift_labels = shift_labels.view(-1) # Enable model parallelism shift_labels = shift_labels.to(shift_logits.device) loss = loss_fct(shift_logits, shift_labels) ``` i found it quite unexpected hence calling it "silently". as this is for a causal LM, shouldn't it be not shifting the labels by default? in modeling GPT2, this is at least documented explicitly. https://github.com/huggingface/transformers/blob/f1d822ba337499d429f832855622b97d90ac1406/src/transformers/models/gpt2/modeling_gpt2.py#L1309-1314 in gemma2, it has the same behavior and no explicit mentioning in the docstring. 
https://github.com/huggingface/transformers/blob/f1d822ba337499d429f832855622b97d90ac1406/src/transformers/models/gemma2/modeling_gemma2.py#L978-L982 I think at least we should force the docstring to mention this, if making a change is too dangerous at this point. ### Motivation I didn't expect this behavior and used my own data loader, which does the shifting already, as I believe that is what `labels` should mean. As a result, I ended up finetuning a model to predict the next-next token, which outputted gibberish. ### Your contribution - hopefully leaving this issue helps communication across users - I can make a one-line change in the docstring. - not sure how exactly, but if this potential misunderstanding could be checked, it'd be great. Technically, we can check if the labels are already shifted, though I don't know where the best place for this is.
closed
completed
false
10
[ "Feature request" ]
[]
2024-08-22T15:54:02Z
2026-03-13T13:32:44Z
2026-03-13T13:32:44Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
keunwoochoi
16,153,797
MDQ6VXNlcjE2MTUzNzk3
User
false
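The shifting behavior discussed in issue #32944 above can be illustrated with a minimal plain-Python sketch (no transformers dependency): because the model shifts internally, the data loader should pass `labels` equal to `input_ids`, unshifted.

```python
# Sketch of the internal alignment in LlamaForCausalLM.forward:
# logits[..., :-1, :] are matched against labels[..., 1:].
input_ids = [10, 11, 12, 13]
labels = list(input_ids)           # pass input_ids verbatim, NOT pre-shifted

scored_positions = input_ids[:-1]  # positions whose logits contribute to the loss
targets = labels[1:]               # the next token each position must predict

pairs = list(zip(scored_positions, targets))
print(pairs)  # [(10, 11), (11, 12), (12, 13)]
```

Pre-shifting in the data loader on top of this internal shift trains the model to predict the token two steps ahead, which matches the gibberish outputs reported in the issue.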
huggingface/transformers
2,504,129,656
I_kwDOCUB6oc6VQfx4
33,290
https://github.com/huggingface/transformers/issues/33290
https://api.github.com/repos/huggingface/transformers/issues/33290
oom when using adafactor optimizer in deepspeed
### System Info ```python - `transformers` version: 4.44.2 - Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.31 - Python version: 3.10.0 - Huggingface_hub version: 0.23.4 - Safetensors version: 0.4.2 - Accelerate version: 0.33.0 - Accelerate config: not found - PyTorch version (GPU?): 2.3.0+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA A800 80GB PCIe ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction i'm running train_xl.sh in [this repo](https://github.com/yisol/IDM-VTON). and i change the 8bit adam optimizer to adafactor optimizer using transformers.optimization.Adafactor. i'm using two 40GB a100, deepspeed stage 2, batchsize=1,VTON-HD dataset. the adafactor optimizer should use less gpu memory, because of less optimizer states than 8bit adam, but it get oom in [this line](https://github.com/huggingface/transformers/blob/ecd61c62862f925a18b4f063dc17fcaf01826e25/src/transformers/optimization.py#L877) and oom happens after 10 steps, i don't know what happen in 10th step, i call the ```accelerate.backward()``` and``` optimizer.step()``` every step. and in 10th step, the memory usage increased from 29GB to 39GB when using 8bit adam optimizer, and get oom when using adafactor optimizer ### Expected behavior could anybody explain this phenomenon
closed
completed
false
10
[ "Usage", "Good First Issue", "bug" ]
[]
2024-09-04T01:56:08Z
2026-03-02T15:37:38Z
2026-03-02T15:37:38Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
zhangvia
38,352,569
MDQ6VXNlcjM4MzUyNTY5
User
false
huggingface/transformers
2,510,650,907
I_kwDOCUB6oc6VpX4b
33,357
https://github.com/huggingface/transformers/issues/33357
https://api.github.com/repos/huggingface/transformers/issues/33357
bus error on version 4.43.0 with pretrained community CLIP model - MacOS
### System Info - `transformers` version: 4.43.0 - Platform: macOS-13.0-arm64-arm-64bit - Python version: 3.10.9 - Huggingface_hub version: 0.24.6 - Safetensors version: 0.4.5 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.4.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import CLIPModel, CLIPTokenizerFast tokenizer = CLIPTokenizerFast.from_pretrained("patrickjohncyh/fashion-clip") model = CLIPModel.from_pretrained("patrickjohncyh/fashion-clip") tokenized = tokenizer(["hello"], return_tensors="pt", padding=True) print("tokenized", tokenized) # bus error occurs here embed = model.get_text_features(**tokenized).detach().cpu().numpy() print("embedded", tokenized) ``` gives : ``` tokenized {'input_ids': tensor([[49406, 3497, 49407]]), 'attention_mask': tensor([[1, 1, 1]])} zsh: bus error python test_hf.py ``` I don't think the issue has been posted already. After bisecting versions, it looks like `4.42.4` does not have the issue and `4.43.0` has the issue I have little insight to provide except the `bus error`, and that this does not occur with the `clip-vit-base-patch32` model. I saw some breaking changes in this version release, but only about the tokenizer. I did not have time to test on a linux distribution yet Thanks ! 
### Expected behavior By using the exact same script with the hugging face CLIP pretrained model, the embedding get computed as they should ``` processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32") ```
closed
completed
false
21
[ "PyTorch", "bug" ]
[]
2024-09-06T15:08:19Z
2026-02-13T15:28:22Z
2025-03-17T08:11:29Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
pezafar
48,569,151
MDQ6VXNlcjQ4NTY5MTUx
User
false
huggingface/transformers
2,522,954,925
I_kwDOCUB6oc6WYTyt
33,453
https://github.com/huggingface/transformers/issues/33453
https://api.github.com/repos/huggingface/transformers/issues/33453
Regression in tokenizer loading
### System Info There was a regression in commit b4727a1216bb21df2795e973063ed07202235d7e that prevents loading of some tokenizers. ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction 1. Install `transformers` 2. Run the following code: ```python from transformers import AutoTokenizer AutoTokenizer.from_pretrained("adsabs/astroBERT") ``` Error output: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[2], [line 1](vscode-notebook-cell:?execution_count=2&line=1) ----> [1](vscode-notebook-cell:?execution_count=2&line=1) AutoTokenizer.from_pretrained("adsabs/astroBERT") ... AttributeError: add_special_tokens conflicts with the method add_special_tokens in BertTokenizerFast ``` ### Expected behavior It's my expectation that the above code should run and produce a working tokenizer. Or, if the tokenizer config is too old and needs to be updated there should be an error message & script/tool/API to guide the user to update the config.
closed
completed
false
19
[ "Core: Tokenization", "Fast Tokenizers", "bug" ]
[]
2024-09-12T17:24:24Z
2026-01-27T08:28:32Z
2024-12-27T08:09:37Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
JCRPaquin
1,820,796
MDQ6VXNlcjE4MjA3OTY=
User
false
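The `add_special_tokens conflicts with the method add_special_tokens` error in issue #33453 above comes from a legacy config key shadowing a method name. A hypothetical guard (plain Python, illustrative only; this is not the actual transformers fix) could filter such keys before applying the config:

```python
class FakeTokenizer:
    """Stand-in for a tokenizer class with an add_special_tokens method."""
    def add_special_tokens(self, tokens):
        return len(tokens)

def safe_kwargs(cls, config):
    # keep only config keys that do not collide with an existing method
    return {k: v for k, v in config.items()
            if not callable(getattr(cls, k, None))}

cfg = {"add_special_tokens": True, "model_max_length": 512}
print(safe_kwargs(FakeTokenizer, cfg))  # {'model_max_length': 512}
```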
huggingface/transformers
2,543,131,188
I_kwDOCUB6oc6XlRo0
33,666
https://github.com/huggingface/transformers/issues/33666
https://api.github.com/repos/huggingface/transformers/issues/33666
Qwen2-VL: Multi-GPU training
### System Info - `transformers` version: 4.45.0.dev0 - Platform: Linux-4.18.0-477.10.1.el8_8.x86_64-x86_64-with-glibc2.28 - Python version: 3.11.5 - Huggingface_hub version: 0.24.0 - Safetensors version: 0.4.3 - Accelerate version: 0.34.2 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: NO - mixed_precision: bf16 - use_cpu: False - debug: False - num_processes: 1 - machine_rank: 0 - num_machines: 1 - gpu_ids: all - rdzv_backend: static - same_network: True - main_training_function: main - enable_cpu_affinity: False - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.2.1+rocm5.7 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: AMD Instinct MI250X ### Who can help? @muellerzr @ArthurZucker @gante Issue about both the Qwen-VL model and perhaps the trainer so not sure who is best suited to answer :) ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Replicating the setup is a bit tough, so this is more of a preliminary discussion issue to see if there is an obvious problem that surfaces. 1. Multi-GPU setup + Huggingface trainer 2. Train Qwen2-VL model with dynamic image resolution 3. The processor creates BatchEncodings with pixel_values, input_ids, attention_mask and image_grid_thw. 4. Run a model forward pass with the model in data parallel mode of the trainer. We observe that compared to mono-gpu setups, the rope values are disaligned with the hidden_states size. 
Typically, in line 1109 (Qwen2VisionTransformerPretrainedModel forward pass): ```python def forward(self, hidden_states: torch.Tensor, grid_thw: torch.Tensor) -> torch.Tensor: hidden_states = self.patch_embed(hidden_states) rotary_pos_emb = self.rot_pos_emb(grid_thw) ``` we can see that rotary_pos_emb and hidden_states have a slightly different dimension 0, e.g. torch.Size([7820, 40]) vs torch.Size([7736, 1280]). Upon further inspection, we see rotary_pos_emb has the same dimension as what we would get in mono-gpu runs (normal since it only depends on the grid_thw argument). However, hidden_states (which correspond to pixel values) have a different size. This makes training crash: ```bash File "/lus/home/CT10/cad15443/mfaysse/colpali/venv/lib/python3.11/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 395, in forward q = apply_rotary_pos_emb_vision(q.unsqueeze(0), rotary_pos_emb).squeeze(0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lus/home/CT10/cad15443/mfaysse/colpali/venv/lib/python3.11/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 254, in apply_rotary_pos_emb_vision output = (tensor * cos) + (rotate_half(tensor) * sin) ~~~~~~~^~~~~ RuntimeError: The size of tensor a (7736) must match the size of tensor b (7808) at non-singleton dimension 1 ``` ### Expected behavior [edited] see below for more details being investigated Thanks !
open
null
false
10
[ "Distributed Training / Models", "trainer", "Feature request", "bug", "Vision", "Multimodal" ]
[]
2024-09-23T16:27:55Z
2026-02-06T08:47:55Z
null
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
ManuelFay
43,467,008
MDQ6VXNlcjQzNDY3MDA4
User
false
huggingface/transformers
2,576,436,738
I_kwDOCUB6oc6ZkU4C
34,046
https://github.com/huggingface/transformers/issues/34046
https://api.github.com/repos/huggingface/transformers/issues/34046
Support for torch._dynamo.export for Phi3
### Feature request Compared to `symbolic_trace`, the new (but I assume, experimental) entrypoint in `torch._dynamo.export` seems to provide a more robust way to extract modular FX graphs that can't have any graph breaks. I have been experimenting with some networks (Pythia, OPT, Llama, Mistral), and they all go through. It seems that Phi3 breaks because of this line: https://github.com/huggingface/transformers/blob/36d410dab637c133f1bb706779c75d9021d403cf/src/transformers/models/phi3/modeling_phi3.py#L213 where `self.inv_freq` is redefined at runtime in the forward pass. This is a bit confusing, and I would recommend dropping `self` and using a normal runtime variable. I'm not sure if this has potential side effects. A similar pattern seems to be repeated in other Embedding classes in Phi3. To reproduce: ```python model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct") model, guards = torch._dynamo.export(model)(**model.dummy_inputs) ``` @gante @ArthurZucker ### Motivation Dropping the reference to `self.inv_freq` would allow obtaining a fullgraph with dynamo. Having a full FX graph is also a requirement for torch.export, although I have not tested that API. ### Your contribution I can't directly contribute with a PR at the moment. I could test a PR from my side to check compatibility with dynamo and potential side effects, once the PR is open.
open
null
false
3
[ "Feature request", "Deployment" ]
[]
2024-10-09T16:42:52Z
2026-03-12T14:04:03Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Giuseppe5
18,719,316
MDQ6VXNlcjE4NzE5MzE2
User
false
huggingface/transformers
2,613,625,535
I_kwDOCUB6oc6byMK_
34,406
https://github.com/huggingface/transformers/issues/34406
https://api.github.com/repos/huggingface/transformers/issues/34406
Support dynamic batch size
### Feature request Hi, thanks for the library! When training, I realize that if a micro batch contains too few tokens, the throughput will be quite bad (i.e. average time per token is large). However, I cannot increase the batch size, because there are long (e.g. 2000 tokens) and short (e.g. 500 tokens) sequences in the training data. The batch size that makes short sequences run fast will make long sequences OOM. Therefore, I am proposing a dynamic (micro) batch size. For example, suppose we have batch_size=16. Then, before this proposal, we have e.g. micro_batch_size=2 & grad_accum=8. After this proposal, for short sequences, use 4 samples in the micro batch; for long sequences, use 2 samples in the micro batch. After they sum up to 16 samples, we can compute the loss and consider this step done. ### Motivation (see above) ### Your contribution I am happy to open a PR
closed
completed
false
4
[ "trainer", "Feature request" ]
[]
2024-10-25T09:48:47Z
2026-03-19T13:16:20Z
2026-03-18T13:14:33Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
fzyzcjy
5,236,035
MDQ6VXNlcjUyMzYwMzU=
User
false
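The proposal in issue #34406 above can be sketched as a greedy token-budget packer (plain Python; `micro_batches` is a hypothetical helper, not a Trainer API): samples are grouped into micro-batches until adding another sample would exceed a token budget, so short sequences share a micro-batch while long ones get their own.

```python
def micro_batches(sample_lens, token_budget):
    """Greedily pack sample indices into micro-batches under a max-token budget."""
    batches, cur, cur_tokens = [], [], 0
    for i, n in enumerate(sample_lens):
        if cur and cur_tokens + n > token_budget:
            batches.append(cur)       # flush the current micro-batch
            cur, cur_tokens = [], 0
        cur.append(i)
        cur_tokens += n
    if cur:
        batches.append(cur)
    return batches

# two short samples share a micro-batch; the long one runs alone
print(micro_batches([500, 500, 2000, 500], 1500))  # [[0, 1], [2], [3]]
```

Gradient accumulation would then run over these variable-size micro-batches until the target effective batch size (16 in the example above) is reached.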
huggingface/transformers
2,629,405,187
I_kwDOCUB6oc6cuYoD
34,567
https://github.com/huggingface/transformers/issues/34567
https://api.github.com/repos/huggingface/transformers/issues/34567
TrainerState's property `num_input_tokens_seen` is not updating
### System Info ``` - `transformers` version: 4.46.0 - Python version: 3.10.15 - Huggingface_hub version: 0.26.1 - Safetensors version: 0.4.5 - Accelerate version: 1.0.1 - Accelerate config: not found - PyTorch version (GPU?): 2.5.0+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA A100 80GB PCIe ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Here is the sample code to reproduce the error ```python from transformers import TrainerCallback, TrainingArguments, Trainer from transformers import AutoTokenizer, AutoModelForCausalLM from datasets import Dataset import torch # Simple callback to monitor tokens class TokenMonitorCallback(TrainerCallback): def on_step_end(self, args, state, control, **kwargs): if state.global_step % 10 == 0: # Print every 10 steps print(f"Step {state.global_step}, Tokens seen: {state.num_input_tokens_seen}") def on_epoch_end(self, args, state, control, **kwargs): print(f"Epoch end - Total tokens processed: {state.num_input_tokens_seen}") # Create a tiny dataset texts = ["Hello world", "This is a test", "Another example"] * 10 dataset = Dataset.from_dict({"text": texts}) # Initialize model and tokenizer model_name = "distilgpt2" tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.pad_token = tokenizer.eos_token model = AutoModelForCausalLM.from_pretrained(model_name) # Tokenization function def tokenize_function(examples): tokenized = tokenizer( examples["text"], padding="max_length", truncation=True, max_length=32, return_tensors="pt" ) # Create labels by shifting 
input_ids tokenized["labels"] = tokenized["input_ids"].clone() return tokenized # Tokenize dataset tokenized_dataset = dataset.map(tokenize_function, batched=True, remove_columns=dataset.column_names) # Training arguments training_args = TrainingArguments( output_dir="./test-trainer", num_train_epochs=2, per_device_train_batch_size=4, logging_steps=10, save_steps=1000, learning_rate=2e-5, report_to="none" ) # Initialize trainer trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_dataset, callbacks=[TokenMonitorCallback()] ) # Start training trainer.train() ``` Following is the output ``` Epoch end - Total tokens processed: 0 Step 10, Tokens seen: 0 Epoch end - Total tokens processed: 0 TrainOutput(global_step=16, training_loss=5.371496677398682, metrics={'train_runtime': 56.2378, 'train_samples_per_second': 1.067, 'train_steps_per_second': 0.285, 'total_flos': 489931407360.0, 'train_loss': 5.371496677398682, 'epoch': 2.0}) ``` ### Expected behavior In the expected behaviour this property should be kept updating withing training loop with the number of input tokens seen on every step.
closed
completed
false
5
[ "bug" ]
[]
2024-11-01T16:30:39Z
2026-03-06T07:15:35Z
2024-11-04T07:47:03Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
SwayamInSync
74,960,567
MDQ6VXNlcjc0OTYwNTY3
User
false
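As a workaround for issue #34567 above, tokens can be counted independently of `state.num_input_tokens_seen`. This is a minimal plain-Python sketch of the idea, not a real `TrainerCallback`:

```python
class TokenCounter:
    """Accumulate the number of input tokens across steps."""
    def __init__(self):
        self.tokens_seen = 0

    def on_step(self, batch_input_ids):
        # batch_input_ids: list of token-id sequences in this step's batch
        self.tokens_seen += sum(len(ids) for ids in batch_input_ids)

c = TokenCounter()
c.on_step([[1, 2, 3], [4, 5]])
c.on_step([[6, 7]])
print(c.tokens_seen)  # 7
```

Note that the Trainer only populates `state.num_input_tokens_seen` when `include_num_input_tokens_seen=True` is set in `TrainingArguments`, which is the likely cause of the zeros reported above.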
huggingface/transformers
2,639,805,010
I_kwDOCUB6oc6dWDpS
34,634
https://github.com/huggingface/transformers/issues/34634
https://api.github.com/repos/huggingface/transformers/issues/34634
BarkProcessor voice_preset doesn't work
### System Info - `transformers` version: 4.47.0.dev0 - Platform: Windows-11-10.0.22631-SP0 - Python version: 3.12.7 - Huggingface_hub version: 0.26.2 - Safetensors version: 0.4.5 - Accelerate version: 1.1.0 - Accelerate config: not found - PyTorch version (GPU?): 2.5.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA GeForce RTX 4080 SUPER ### Who can help? @ylacombe ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction **Code:** from bark import SAMPLE_RATE, generate_audio, preload_models import sounddevice from transformers import BarkModel, BarkProcessor import torch import numpy as np from optimum.bettertransformer import BetterTransformer from scipy.io.wavfile import write as write_wav import re def barkspeed(text_prompt): processor = BarkProcessor.from_pretrained("suno/bark-small") model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device) model = BetterTransformer.transform(model, keep_original_model=False) model.enable_cpu_offload() sentences = re.split(r'[.?!]', text_prompt) pieces = [] for sentence in sentences: inp = processor(sentence.strip(), voice_preset=SPEAKER).to(device) audio = model.generate(**inp, do_sample=True, fine_temperature=0.4, coarse_temperature=0.5) audio = ((audio/torch.max(torch.abs(audio))).numpy(force=True).squeeze()*pow(2, 15)).astype(np.int16) pieces.append(audio) write_wav("bark_generation.wav", SAMPLE_RATE, np.concatenate(pieces)) sounddevice.play(np.concatenate(pieces), samplerate=24000) sounddevice.wait() **Error Message:** ****The attention mask is not set and cannot be inferred 
from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Traceback (most recent call last): File "F:\OllamaRAG\BarkUsage\BarkUsage.py", line 56, in <module> barkspeed("""Hey, have you heard about this new text-to-audio model called "Bark"? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\OllamaRAG\BarkUsage\BarkUsage.py", line 47, in barkspeed audio = model.generate(**inp, do_sample=True, fine_temperature=0.4, coarse_temperature=0.5) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\Program Files\anaconda3\envs\ollamaRAG\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "F:\Program Files\anaconda3\envs\ollamaRAG\Lib\site-packages\transformers\models\bark\modeling_bark.py", line 1737, in generate coarse_output = self.coarse_acoustics.generate( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\Program Files\anaconda3\envs\ollamaRAG\Lib\site-packages\transformers\models\bark\modeling_bark.py", line 1078, in generate semantic_output = torch.hstack([x_semantic_history, semantic_output]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensors in method wrapper_CUDA_cat) ### Expected behavior I used the code to generate some audio. Before I upgraded transformers and bark, the voice preset didn't work, bark kept changing preset. In the first half part of call function in Barkprocessor, it seemed fine, tensors were loaded properly. But in the generate function history_prompt was empty at first, then it was loaded as all 10000, After I upgraded transformers and bark, the error message shows. 
And after I delete the voice_preset=SPEAKER part, the code works, but the preset still keeps changing. Could anyone please tell me how I can get the preset to work?
closed
completed
false
11
[ "bug", "Audio" ]
[]
2024-11-07T04:01:37Z
2026-03-06T07:24:28Z
2025-07-18T13:14:47Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
etheryee
24,294,721
MDQ6VXNlcjI0Mjk0NzIx
User
false
huggingface/transformers
2,650,135,779
I_kwDOCUB6oc6d9dzj
34,689
https://github.com/huggingface/transformers/issues/34689
https://api.github.com/repos/huggingface/transformers/issues/34689
Transformers 4.46.2 breaks model loading for Llama 3.2 90B Vision Instruct
### System Info - `transformers` version: 4.46.2 - Platform: Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.14 - Huggingface_hub version: 0.26.2 - Safetensors version: 0.4.5 - Accelerate version: 1.1.1 - Accelerate config: not found - PyTorch version (GPU?): 2.2.2 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: Tesla V100-SXM2-32GB ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Download weights for meta-llama/Llama-3.2-90B-Vision-Instruct 2. Load weights with the transformers 4.46.2 library installed This will result in `Some weights of MllamaForCausalLM were not initialized from the model checkpoint at meta-llama/Llama-3.2-90B-Vision-Instruct and are newly initialized: ['embed_tokens.weight', ...` ### Expected behavior With 4.45.2 I was able to load model weights with no issues. It seems like maybe the weights have different names in the Odict?
closed
completed
false
7
[ "bug" ]
[]
2024-11-11T18:58:37Z
2026-03-06T08:09:09Z
2024-11-25T10:20:21Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
iprivit
41,305,661
MDQ6VXNlcjQxMzA1NjYx
User
false
huggingface/transformers
2,658,784,395
I_kwDOCUB6oc6eedSL
34,733
https://github.com/huggingface/transformers/issues/34733
https://api.github.com/repos/huggingface/transformers/issues/34733
Better error message when loading adapter models with peft dependency missing
### Feature request Loading adapter models (such as https://huggingface.co/lightonai/MonoQwen2-VL-v0.1/tree/main) fails with an error message when peft isn't installed. The error message `OSError: lightonai/MonoQwen2-VL-v0.1 does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.` is a bit cryptic and requires the user to understand that - the model that will be loaded is a peft adapter - peft isn't installed in the current env To improve UX, it would be useful to show a different error message such as `"The model lightonai/MonoQwen2-VL-v0.1 is an adapter model. To load it, you need to install peft (hint: run `pip install peft`)".` ### Motivation Improve UX. The user may get the impression that the model repository is corrupted. ### Your contribution This feature should probably be implemented by core maintainers that are familiar with the internals of the model loading code.
closed
completed
false
2
[ "Feature request", "PEFT" ]
[]
2024-11-14T13:15:48Z
2026-02-03T13:26:06Z
2026-02-03T13:26:06Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
maxjeblick
24,281,881
MDQ6VXNlcjI0MjgxODgx
User
false
huggingface/transformers
2,691,898,113
I_kwDOCUB6oc6gcxsB
34,928
https://github.com/huggingface/transformers/issues/34928
https://api.github.com/repos/huggingface/transformers/issues/34928
Recomputed tensor size does not match when using activation checkpointing with FSDP and accelerate
### System Info

```
- `transformers` version: 4.46.3
- Platform: Linux-6.8.0-1015-aws-x86_64-with-glibc2.35
- Python version: 3.12.6
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: distributed (`accelerate`)
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-40GB
```

### Who can help?

@muellerz @SunMarc @ArthurZucker

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

I'm running into the following error while trying to use the SFTTrainer with FSDP and the `accelerate` library (full stack trace provided at the very bottom of this post).

```
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass
```

This occurs when I set `gradient_checkpointing: false` and `activation_checkpointing: true`. Curiously, it actually seems to work if I set `gradient_checkpointing: true` and `activation_checkpointing: false`, **but** that produces the following warning message:

```
When using FSDP full shard, instead of using `gradient_checkpointing` in TrainingArguments, please use `activation_checkpointing` in `fsdp_config`. The former introduces a redundant AllGather operation in backward pass. Reference: https://github.com/huggingface/transformers/issues/30404
```

There are a few related GitHub issues that touch on this problem:

1. https://github.com/Lightning-AI/pytorch-lightning/issues/19267
2. https://github.com/huggingface/transformers/issues/28499
3. https://github.com/pytorch/pytorch/issues/124788
4. https://github.com/huggingface/transformers/issues/32073

One of these suggested setting `use_reentrant: true`, but that doesn't resolve the issue for me.

I'm attempting to run this as a SageMaker training job using the official HuggingFace estimator (this amounts to the following command: `torchrun --nnodes 1 --nproc_per_node 8 train.py`). My training script is essentially a lightly adapted version of the official examples. Below is how I'm instantiating the HuggingFace estimator object:

```python
huggingface_estimator = HuggingFace(
    entry_point='train.py',  # train script
    # entry_point='launch.py',  # train script
    dependencies=['requirements.txt'],
    source_dir='./',
    instance_type='ml.p4d.24xlarge',
    instance_count=1,
    max_run=2 * 24 * 60 * 60,
    base_job_name=job_name,
    role=role,
    volume_size=1024,
    transformers_version='4.36.0',
    pytorch_version='2.1.0',
    py_version='py310',
    hyperparameters={"config_s3_uri": "s3://<foo>"},
    # metric_definitions=metric_definitions,
    disable_output_compression=True,
    distribution={"torch_distributed": {"enabled": True}},  # enables torchrun
    environment={
        "HUGGINGFACE_HUB_CACHE": "/tmp/.cache",
        "HF_TOKEN": HfFolder.get_token(),
        "ACCELERATE_USE_FSDP": "1",  # enable FSDP
        "FSDP_CPU_RAM_EFFICIENT_LOADING": "0",  # enable CPU RAM efficient loading
        "FSDP_AUTO_WRAP_POLICY": "TRANSFORMER_BASED_WRAP",
        "FSDP_BACKWARD_PREFETCH": "BACKWARD_PRE",
        "FSDP_STATE_DICT_TYPE": "FULL_STATE_DICT",
        "NCCL_TIMEOUT": "3600",  # 1 hour timeout
        "NCCL_DEBUG": "WARN",
        "NCCL_IB_TIMEOUT": "3600",
        "NCCL_SOCKET_TIMEOUT": "3600",
        "NCCL_ASYNC_ERROR_HANDLING": "1",
        "NCCL_P2P_LEVEL": "NVL",
        "CUDA_DEVICE_MAX_CONNECTIONS": "1",
        "MAX_JOBS": "1",
        "PYTORCH_CUDA_ALLOC_CONF": "max_split_size_mb:512",
        "TORCH_DISTRIBUTED_DEBUG": "DETAIL",
    },
    checkpoint_s3_uri=f's3://<foo>',
)
```

Below are some of the relevant parameters from my input config.
```yaml
gradient_checkpointing: false
gradient_checkpointing_kwargs:
  use_reentrant: true
attn_implementation: "flash_attention_2"
packing: false
bf16: "auto"
fsdp: "full_shard auto_wrap offload"
fsdp_config:
  limit_all_gathers: true
  backward_prefetch: "backward_pre"
  forward_prefetch: "false"
  use_orig_params: "false"
  min_num_params: 0
  activation_checkpointing: "true"
```

*Full Stack Trace* (output from multiple ranks is interleaved)

```
Traceback (most recent call last):
File "train.py", line 224, in <module>
main(cfg)
File "train.py", line 207, in main
main(cfg)
File "train.py", line 207, in main
trainer.train()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
trainer.train()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
Traceback (most recent call last):
File "train.py", line 224, in <module>
main(cfg)main(cfg)
File "train.py", line 207, in main
trainer.train()trainer.train()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
return inner_training_loop(return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3612, in training_step
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3612, in training_step
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File
"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3612, in training_step self.accelerator.backward(loss, **kwargs) File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2241, in backward Traceback (most recent call last): File "/opt/ml/code/train.py", line 224, in <module> main(cfg) File "/opt/ml/code/train.py", line 207, in main trainer.train() File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2123, in train return inner_training_loop( File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2481, in _inner_training_loop tr_loss_step = self.training_step(model, inputs, num_items_in_batch) File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3612, in training_step self.accelerator.backward(loss, **kwargs) File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2241, in backward loss.backward(**kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward torch.autograd.backward( File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook frame.check_recomputed_tensors_match(gid) File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match raise CheckpointError( torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass. 
tensor at position 18: saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} tensor at position 19: saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=0)} loss.backward(**kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward loss.backward(**kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward self.accelerator.backward(loss, **kwargs)self.accelerator.backward(loss, **kwargs) File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2241, in backward File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2241, in backward torch.autograd.backward( File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward torch.autograd.backward( File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook frame.check_recomputed_tensors_match(gid) File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match loss.backward(**kwargs)loss.backward(**kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward File 
"/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward frame.check_recomputed_tensors_match(gid) File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match torch.autograd.backward(torch.autograd.backward( File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward raise CheckpointError( torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass. tensor at position 18: saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=2)} recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=2)} tensor at position 19: saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=2)} recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=2)} Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward passVariable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 1075, in unpack_hook raise CheckpointError( torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass. 
tensor at position 18: saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)} recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)} tensor at position 19: saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)} recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=1)} frame.check_recomputed_tensors_match(gid)frame.check_recomputed_tensors_match(gid) File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match File "/opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 850, in check_recomputed_tensors_match raise CheckpointError( torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass. tensor at position 18: saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=3)} recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=3)} tensor at position 19: saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=3)} recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=3)} raise CheckpointError( torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass. 
tensor at position 18: saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=6)} recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=6)} tensor at position 19: saved metadata: {'shape': torch.Size([2, 1024, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=6)} recomputed metadata: {'shape': torch.Size([2, 2048, 28, 128]), 'dtype': torch.bfloat16, 'device': device(type='cuda', index=6)} 0%| | 0/100 [00:13<?, ?it/s] [E ProcessGroupGloo.cpp:138] Rank 5 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank. [E ProcessGroupGloo.cpp:138] Rank 4 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank. [E ProcessGroupGloo.cpp:138] Rank 7 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank. [2024-11-25 18:39:43,758] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 69 closing signal SIGTERM [2024-11-25 18:39:43,758] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 73 closing signal SIGTERM [2024-11-25 18:39:43,758] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 74 closing signal SIGTERM [2024-11-25 18:39:43,758] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 76 closing signal SIGTERM [2024-11-25 18:39:47,931] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 70) of binary: /opt/conda/bin/python ``` ### Expected behavior The expected behavior is for the SFTTrainer's `train()` method to run without errors.
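The consistency check that raises here can be mimicked in isolation (a sketch; `compare_tensor_metadata` is a hypothetical helper, not torch's actual code) to see why a 1024 vs. 2048 sequence dimension trips it:

```python
def compare_tensor_metadata(saved: dict, recomputed: dict) -> dict:
    """Return the fields whose saved and recomputed values differ --
    roughly what torch.utils.checkpoint verifies between the original
    forward pass and the recomputation during backward."""
    return {
        key: (saved[key], recomputed[key])
        for key in saved
        if saved[key] != recomputed[key]
    }


saved = {"shape": (2, 1024, 28, 128), "dtype": "bfloat16", "device": "cuda:0"}
recomputed = {"shape": (2, 2048, 28, 128), "dtype": "bfloat16", "device": "cuda:0"}

# Only the sequence dimension differs, which suggests the recomputation
# saw a different (e.g. re-padded or re-packed) batch than the forward.
diff = compare_tensor_metadata(saved, recomputed)
```

The fact that only the sequence length changes, while dtype and device stay identical, points at non-deterministic input shaping (padding/unpadding between forward and recompute) rather than at FSDP parameter sharding itself.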
open
reopened
false
41
[ "bug" ]
[]
2024-11-25T19:02:12Z
2026-03-03T18:28:52Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jjbuck
12,192,842
MDQ6VXNlcjEyMTkyODQy
User
false
huggingface/transformers
2,770,869,698
I_kwDOCUB6oc6lKB3C
35,532
https://github.com/huggingface/transformers/issues/35532
https://api.github.com/repos/huggingface/transformers/issues/35532
RagTokenizer Missing patch_token_id, patch_token, and encode Functionality
### Feature request

I propose adding the following functionalities to the RagTokenizer in the Hugging Face Transformers library:

1. Support for patch_token_id and patch_token attributes: These attributes are essential for specifying special tokens that can be used during tokenization, particularly for Retrieval-Augmented Generation (RAG) models.
2. Implementation of the encode function: This function is critical for converting input text into token IDs, which are a standard input for Transformer-based models.

These additions would bring RagTokenizer in line with other tokenizers in the library, making it easier to use in preprocessing pipelines for training and inference.

Paper reference: [RAG: Retrieval-Augmented Generation](https://arxiv.org/abs/2005.11401)
Current RagTokenizer documentation: Hugging Face Transformers

### Motivation

The absence of the patch_token_id, patch_token, and encode functionalities in RagTokenizer introduces several limitations:

- It is challenging to preprocess data for RAG models without a way to specify and use special tokens like patch_token.
- The lack of an encode function makes it cumbersome to tokenize text into input IDs, which is a critical step for training and inference. This is a deviation from the expected behavior of tokenizers in the Transformers library, and can cause confusion and inefficiency for users accustomed to the functionality available in other tokenizers like BertTokenizer or GPT2Tokenizer.

Addressing these issues will make RagTokenizer more consistent with the rest of the library and improve usability in RAG-related workflows.

### Your contribution

I am willing to contribute by:

- Submitting a Pull Request (PR) to implement these functionalities, given guidance on the expected behavior and the existing code structure.
- Writing unit tests to verify the behavior of the patch_token_id, patch_token, and encode functionalities.
- Updating the documentation to reflect these changes.

Let me know if this aligns with your vision for the RagTokenizer, and I'd be happy to assist further! cc @ArthurZucker @itazap
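Since `RagTokenizer` composes a question-encoder tokenizer and a generator tokenizer, the requested `encode` and special-token attributes could plausibly delegate to the question encoder. A minimal sketch of that delegation pattern (class and attribute names here are illustrative, not the actual transformers implementation):

```python
class DelegatingTokenizer:
    """Wraps a question-encoder tokenizer and a generator tokenizer
    and forwards encode/special-token lookups to the question encoder."""

    def __init__(self, question_encoder, generator):
        self.question_encoder = question_encoder
        self.generator = generator

    def encode(self, text, **kwargs):
        # Forward to the wrapped tokenizer so callers get the same
        # behavior they expect from e.g. BertTokenizer.encode.
        return self.question_encoder.encode(text, **kwargs)

    @property
    def pad_token_id(self):
        return self.question_encoder.pad_token_id


class StubTokenizer:
    """Toy stand-in for a real tokenizer: one fake id per word."""
    pad_token_id = 0

    def encode(self, text, **kwargs):
        return [hash(word) % 1000 for word in text.split()]


tok = DelegatingTokenizer(StubTokenizer(), StubTokenizer())
ids = tok.encode("retrieval augmented generation")
```

The same forwarding would cover any other missing attribute, keeping the composite tokenizer's surface consistent with the single-tokenizer classes.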
open
null
false
4
[ "Good Second Issue", "Feature request" ]
[]
2025-01-06T15:13:29Z
2026-01-24T21:37:07Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
hanshengzhu0001
74,083,194
MDQ6VXNlcjc0MDgzMTk0
User
false
huggingface/transformers
2,772,466,699
I_kwDOCUB6oc6lQHwL
35,545
https://github.com/huggingface/transformers/issues/35545
https://api.github.com/repos/huggingface/transformers/issues/35545
ModernBERT export to onnx error
### System Info

- `transformers` version: 4.48.0.dev0
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090

### Who can help?

@ArthurZucker

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

I trained a classification model based on ModernBERT and tried to export it to ONNX with the following script.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification


def export():
    tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base", model_max_length=4096)
    model = AutoModelForSequenceClassification.from_pretrained(
        "./checkpoints",
        num_labels=3,
        # reference_compile=False,
    )
    model.eval()

    samples = ['examples']
    tokenized = tokenizer(samples, return_tensors='pt', max_length=tokenizer.model_max_length,
                          padding='max_length', truncation=True)
    input_ids = tokenized['input_ids'].to('cuda')
    attention_mask = tokenized['attention_mask'].to('cuda')
    model = model.to('cuda')

    with torch.no_grad():
        torch.onnx.export(
            model,
            (input_ids, attention_mask),
            './model.onnx',
            input_names=["input_ids", "attention_mask"],
            output_names=["logits"],
        )


if __name__ == '__main__':
    export()
```

This fails with the errors below. It may be related to https://github.com/pytorch/pytorch/issues/104748.

```
You are attempting to use Flash Attention 2.0 without specifying a torch dtype.
This might lead to unexpected behaviour You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in ModernBertForSequenceClassification is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)` /miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py:711: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
max_seqlen_in_batch = int(seqlens_in_batch.max().item()) Traceback (most recent call last): File "/modernBERT/export_onnx.py", line 39, in <module> export() File "/modernBERT/export_onnx.py", line 28, in export torch.onnx.export( File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/__init__.py", line 375, in export export( File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 502, in export _export( File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 1564, in _export graph, params_dict, torch_out = _model_to_graph( ^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph graph, params, torch_out, module = _create_jit_graph(model, args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 997, in _create_jit_graph graph, torch_out = _trace_and_get_graph_from_model(model, args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 904, in _trace_and_get_graph_from_model trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 1500, in _get_trace_graph outs = ONNXTracedModule( ^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 139, in forward graph, out = torch._C._create_graph_by_tracing( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 130, in wrapper outs.append(self.inner(*trace_inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward result = self.forward(*input, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 1160, in forward outputs = self.model( ^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward result = self.forward(*input, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 895, in forward hidden_states = self.embeddings(input_ids) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward result = self.forward(*input, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 210, in forward self.compiled_embeddings(input_ids) File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 444, in _fn raise RuntimeError( RuntimeError: Detected that you are using FX to torch.jit.trace a dynamo-optimized function. This is not supported at the moment. ``` https://huggingface.co/answerdotai/ModernBERT-base/discussions/10 When I read this post I modified part of the code as follows. ``` model = AutoModelForSequenceClassification.from_pretrained( "./checkpoints", num_labels=3, reference_compile=False, ) ``` I got another error. ``` You are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in ModernBertForSequenceClassification is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. 
Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)` /miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py:711: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! max_seqlen_in_batch = int(seqlens_in_batch.max().item()) /miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:166: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert sin.shape == cos.shape /miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:168: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert rotary_dim <= headdim, "rotary_dim must be <= headdim" /miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:169: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert headdim <= 256, "Only support headdim <= 256" /miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:170: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. 
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert seqlen_ro >= seqlen, "seqlen_ro must be >= seqlen" /miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:185: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert seqlen_offsets + seqlen <= seqlen_ro /miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:188: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if rotary_dim < headdim and not inplace: /miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:193: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if rotary_dim <= 32 /miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:194: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! else (64 if rotary_dim <= 64 else (128 if rotary_dim <= 128 else 256)) /miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py:197: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. 
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! BLOCK_M = 4 if interleaved else (8 if rotary_dim <= 128 else 4) Traceback (most recent call last): File "/modernBERT/export_onnx.py", line 39, in <module> export() File "/modernBERT/export_onnx.py", line 28, in export torch.onnx.export( File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/__init__.py", line 375, in export export( File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 502, in export _export( File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 1564, in _export graph, params_dict, torch_out = _model_to_graph( ^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph graph, params, torch_out, module = _create_jit_graph(model, args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 997, in _create_jit_graph graph, torch_out = _trace_and_get_graph_from_model(model, args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/onnx/utils.py", line 904, in _trace_and_get_graph_from_model trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 1500, in _get_trace_graph outs = ONNXTracedModule( ^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 139, in forward graph, out = torch._C._create_graph_by_tracing( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/jit/_trace.py", line 130, in wrapper outs.append(self.inner(*trace_inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward result = self.forward(*input, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 1160, in forward outputs = self.model( ^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward result = self.forward(*input, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 913, in forward layer_outputs = encoder_layer( ^^^^^^^^^^^^^^ File 
"/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward result = self.forward(*input, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 529, in forward attn_outputs = self.attn( ^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward result = self.forward(*input, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 487, in forward attn_outputs = MODERNBERT_ATTENTION_FUNCTION[self.config._attn_implementation]( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 349, in flash_attention_forward qkv = rotary_emb(qkv, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward result = self.forward(*input, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 178, in forward qkv = apply_rotary_unpadded( ^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 136, in apply_rotary_unpadded return ApplyRotaryEmbUnpad.apply(qkv, cos, sin, cu_seqlens, max_seqlen) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/torch/autograd/function.py", line 575, in apply return super().apply(*args, **kwargs) # type: ignore[misc] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 75, in forward apply_rotary( File "/miniconda3/envs/bert/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py", line 202, in apply_rotary rotary_kernel[grid]( File "/miniconda3/envs/bert/lib/python3.11/site-packages/triton/runtime/jit.py", line 345, in <lambda> return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/triton/runtime/jit.py", line 662, in run kernel = self.compile( ^^^^^^^^^^^^^ File "/miniconda3/envs/bert/lib/python3.11/site-packages/triton/compiler/compiler.py", line 276, in compile module = src.make_ir(options, codegen_fns, context) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/miniconda3/envs/bert/lib/python3.11/site-packages/triton/compiler/compiler.py", line 113, in make_ir return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ triton.compiler.errors.CompilationError: at 32:22: # Meta-parameters BLOCK_K: tl.constexpr, IS_SEQLEN_OFFSETS_TENSOR: tl.constexpr, IS_VARLEN: tl.constexpr, INTERLEAVED: tl.constexpr, CONJUGATE: tl.constexpr, BLOCK_M: tl.constexpr, ): pid_m = tl.program_id(axis=0) pid_batch = tl.program_id(axis=1) pid_head = tl.program_id(axis=2) rotary_dim_half = rotary_dim // 2 ^ IncompatibleTypeErrorImpl('invalid operands of type pointer<int64> and triton.language.int32') ``` ### Expected behavior export to model.onnx
closed
completed
false
10
[ "bug", "ONNX" ]
[]
2025-01-07T10:26:26Z
2026-02-12T22:04:20Z
2025-01-14T10:22:09Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
wakaka6
48,764,488
MDQ6VXNlcjQ4NzY0NDg4
User
false
huggingface/transformers
2,789,302,377
I_kwDOCUB6oc6mQWBp
35,707
https://github.com/huggingface/transformers/issues/35707
https://api.github.com/repos/huggingface/transformers/issues/35707
Issue with Progressive Generation Using inputs_embeds and past_key_values
### System Info - `transformers` version: 4.46.3 - Platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.17 - Python version: 3.8.20 - Huggingface_hub version: 0.26.1 - Safetensors version: 0.4.5 - Accelerate version: 1.0.1 - Accelerate config: not found - PyTorch version (GPU?): 2.4.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: no - Using GPU in script?: yes - GPU type: NVIDIA RTX A6000 ### Who can help? @gante ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction I am currently rewriting the generate_progressively function for my custom model class. My goal is to enable the model to generate results progressively by concatenating the initial input_ids with each element of the compress_outputs sequence in turn. Specifically: 1. In the first iteration, the model generates results by concatenating input_ids with the first element of compress_outputs. 2. In the second iteration, it concatenates input_ids with the first and second elements of compress_outputs (the first two elements) to generate results. 3. This process continues until the last element of the compress_outputs sequence is included. To improve efficiency, I want to leverage caching, as the majority of the concatenated input in each iteration has already been used to compute past_key_values. Below is the code snippet for the function I implemented. In this context, self.model refers to mistral-7b-chat-v0.2. 
```
@torch.no_grad()
def generate_progressively(
    self,
    input_ids,
    attention_mask,
    compress_outputs,
    **kwargs,
):
    results = []
    compress_output_count = compress_outputs.size(1)
    batch_size = input_ids.size(0)
    inputs_embs = self.base.model.embed_tokens(input_ids)
    prompt_cache = DynamicCache()
    outputs = self.model(
        input_ids=input_ids,
        attention_mask=attention_mask,
        use_cache=True,
        past_key_values=prompt_cache,
    )
    prompt_cache = outputs.past_key_values
    for compress_ind in range(compress_output_count):
        current_compress_outputs = compress_outputs[:, compress_ind: compress_ind+1, :].type_as(input_ids)
        outputs = self.model(
            input_ids=None,
            inputs_embeds=current_compress_outputs,
            use_cache=True,
            past_key_values=prompt_cache,
        )
        prompt_cache = outputs.past_key_values
        inputs_embs = torch.cat([inputs_embs, current_compress_outputs], dim=1)
        attention_mask = torch.cat([attention_mask, torch.ones(batch_size, 1, device=input_ids.device)], dim=1)
        generated_outputs = self.base.generate(
            inputs_embeds=inputs_embs,
            attention_mask=attention_mask,
            use_cache=True,
            past_key_values=prompt_cache,
            return_dict_in_generate=True,
            **kwargs,
        )
        results.append(generated_outputs.sequences)
    return results
```
When I execute this code, the program throws an error during execution. The error occurs at line 393 in transformers/generation/utils.py, specifically in the prepare_inputs_for_generation function. The problematic line of code is:
```
if inputs_embeds is not None and cache_position[0] == 0:
```
The error message is: IndexError: index 0 is out of bounds for dimension 0 with size 0. I tracked the execution of the code, and here’s a detailed breakdown of the issue: The error occurs in transformers/generation/utils.py. Initially, the program enters the self._sample function and then proceeds to the self._get_initial_cache_position function.
Within this function, the following line:
```
if not is_torchdynamo_compiling():
    cache_position = cache_position[past_length:]
```
causes the correct cache_position slice to become empty, resulting in an IndexError in subsequent steps. Even if I manage to fix the issue with cache_position, another problem arises later in the self.prepare_inputs_for_generation function. The relevant code is as follows:
```
if not self.config.is_encoder_decoder:
    if inputs_embeds is not None and cache_position[0] == 0:
        model_inputs[input_ids_key] = None
        model_inputs["inputs_embeds"] = inputs_embeds
    else:
        model_inputs[input_ids_key] = input_ids.clone(memory_format=torch.contiguous_format)
        model_inputs["inputs_embeds"] = None
```
In my case, I provide only inputs_embeds and past_key_values, and since cache_position[0] is not 0, the code attempts to set model_inputs[input_ids_key] using input_ids. However, since input_ids is None, this results in further issues. Under the current implementation of the generate function in transformers, is it possible to use only inputs_embeds and past_key_values for generation? How can I modify my implementation to achieve progressive generation with caching as intended? Are there specific guidelines for correctly managing cache_position and ensuring compatibility with inputs_embeds? ### Expected behavior My primary objective is to progressively generate outputs by leveraging caching (past_key_values) to improve efficiency.
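The empty-slice behavior described above can be reproduced without transformers at all. The sketch below (helper name is mine; the logic mirrors `cache_position = cache_position[past_length:]` from `_get_initial_cache_position`) shows why `cache_position[0]` later raises an IndexError:

```python
def sliced_cache_position(seq_len: int, past_length: int) -> list:
    # cache_position starts as [0, 1, ..., seq_len - 1] and is then cut down
    # to the positions not yet covered by past_key_values.
    cache_position = list(range(seq_len))
    return cache_position[past_length:]

# When generate() is handed embeddings the cache already covers,
# past_length >= seq_len, the slice is empty, and the subsequent
# `cache_position[0] == 0` check indexes into an empty tensor.
```

This suggests the fix is to hand `generate` only the inputs that extend past the cache (or an explicit `cache_position`), rather than the full concatenated embeddings; I have not verified which variant the maintainers recommend.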
closed
completed
false
18
[ "bug", "Generation" ]
[]
2025-01-15T09:39:18Z
2026-02-18T22:48:47Z
2025-03-26T08:06:29Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Superbooming
45,536,749
MDQ6VXNlcjQ1NTM2NzQ5
User
false
huggingface/transformers
2,825,179,039
I_kwDOCUB6oc6oZM-f
36,002
https://github.com/huggingface/transformers/issues/36002
https://api.github.com/repos/huggingface/transformers/issues/36002
Mismatch Between Image Tokens and Features in LLaVA Model Fine-Tuning
**Model: llava-hf/llava-1.5-7b-hf** **Issue Description** When I try to generate a response using the fine-tuned model, I encounter the following error: ValueError: Image features and image tokens do not match: tokens: 575, features: 576 This error occurs during the generate() call, indicating a mismatch between the number of image tokens and image features. **Steps I’ve Taken:** - Image Preprocessing: - I resized the input image to dimensions that are multiples of the patch_size (14 for LLaVA models). - The image is resized to 518x336, which is a multiple of 14. **Processor Configuration:** - I manually set the patch_size and vision_feature_select_strategy in the processor to match the model's configuration. - I verified that the processor's configuration is correct. **Debugging Inputs:** - I printed the inputs (input_ids, pixel_values, etc.) to ensure they are correctly formatted. - The inputs are moved to the GPU for processing. **Model Loading:** The model and processor are loaded from the fine-tuned directory, and the model is moved to the GPU. 
Code Snippet — here’s the relevant part of my code:
```
# Load the processor and model
processor1 = LlavaProcessor.from_pretrained(model_path)
new_model_v1 = LlavaForConditionalGeneration.from_pretrained(model_path).to("cuda:0")

# Resize the image
patch_size = new_model_v1.config.vision_config.patch_size
shortest_edge = processor1.image_processor.size.get("shortest_edge", 336)
original_width, original_height = raw_image.size
scale_factor = shortest_edge / min(original_width, original_height)
new_width = int(original_width * scale_factor)
new_height = int(original_height * scale_factor)
new_width = (new_width // patch_size) * patch_size
new_height = (new_height // patch_size) * patch_size
raw_image = raw_image.resize((new_width, new_height))  # Resized to multiples of patch_size

# Process inputs
inputs = processor1(images=raw_image, text=prompt, return_tensors='pt')
inputs = {k: v.to("cuda:0") for k, v in inputs.items()}

# Generate response
output = new_model_v1.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=200,
    do_sample=False
)
```
**Questions:**
- Why is there a mismatch between the number of image tokens (575) and image features (576)?
- Is there a mistake in my image preprocessing or model configuration that could cause this issue?
- How can I ensure that the number of image tokens matches the number of image features?
- Are there any additional steps I need to take to align the image tokens and features correctly?

**Additional Information**
- I have tried the Hugging Face transformers library versions 4.48.2 and 4.48.2.
- The fine-tuned model is saved in Google Drive, and I’m loading it in a new Colab session.
- The base model is llava-hf/llava-1.5-7b-hf.

<img width="885" alt="Image" src="https://github.com/user-attachments/assets/444c2286-629d-4385-85df-6c485a27f076" /> <img width="418" alt="Image" src="https://github.com/user-attachments/assets/3a24ce8b-09ae-4a36-b54f-3de0946619b8" />
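For context on the arithmetic: LLaVA-1.5's vision tower sees a 336×336 input with patch size 14, i.e. a 24×24 grid of 576 patches; CLIP prepends a CLS token (577 features), and the `"default"` `vision_feature_select_strategy` drops CLS to leave 576. An off-by-one mismatch such as 575 vs 576 therefore usually points at a `patch_size` / strategy setting on the processor that disagrees with the model. A sketch of the expected-feature arithmetic (the helper is mine, not part of transformers):

```python
def expected_image_features(height: int, width: int, patch_size: int = 14,
                            strategy: str = "default") -> int:
    """Number of image features the language model expects for one image."""
    patches = (height // patch_size) * (width // patch_size)
    # CLIP prepends a CLS token; LLaVA's "default" strategy drops it,
    # while "full" keeps it.
    return patches if strategy == "default" else patches + 1
```

Comparing this number with the count of `<image>` placeholder tokens the processor inserts is a quick way to see which side of the mismatch is misconfigured.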
closed
completed
false
12
[]
[]
2025-02-01T12:15:54Z
2026-01-31T13:45:00Z
2025-02-02T08:36:57Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Md-Nasif03
164,668,292
U_kgDOCdCjhA
User
false
huggingface/transformers
2,826,780,037
I_kwDOCUB6oc6ofT2F
36,010
https://github.com/huggingface/transformers/issues/36010
https://api.github.com/repos/huggingface/transformers/issues/36010
ImportError: cannot import name 'GenerationMixin' from 'transformers.generation'
### System Info - `transformers` version: 4.47.1 - Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39 - Python version: 3.11.11 - Huggingface_hub version: 0.28.1 - Safetensors version: 0.5.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.6.0+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA GeForce RTX 3090 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Running Usage example on this [repo](https://github.com/segment-any-text/wtpsplit) Run the codes on new minicoda virtual environmet only after `pip install wtpsplit` and `pip install torch`. `--------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[2], line 3 1 from wtpsplit import SaT ----> 3 sat = SaT("sat-3l") 4 # optionally run on GPU for better performance 5 # also supports TPUs via e.g. 
sat.to("xla:0"), in that case pass `pad_last_batch=True` to sat.split 6 sat.half().to("cuda") File ~/miniforge3/envs/sat/lib/python3.11/site-packages/wtpsplit/__init__.py:514, in SaT.__init__(self, model_name_or_model, from_pretrained_kwargs, ort_providers, ort_kwargs, style_or_domain, language, lora_path, hub_prefix) 511 except ModuleNotFoundError: 512 raise ValueError("Please install `torch` to use WtP with a PyTorch model.") --> 514 import wtpsplit.models # noqa 516 self.model = PyTorchWrapper( 517 AutoModelForTokenClassification.from_pretrained( 518 model_name_to_fetch, **(from_pretrained_kwargs or {}) 519 ) 520 ) 521 # LoRA LOADING File ~/miniforge3/envs/sat/lib/python3.11/site-packages/wtpsplit/models.py:13 8 from transformers import AutoModel, AutoModelForTokenClassification 9 from transformers.modeling_outputs import ( 10 BaseModelOutputWithPoolingAndCrossAttentions, 11 BaseModelOutputWithPastAndCrossAttentions, 12 ) ---> 13 from transformers.modeling_utils import ModuleUtilsMixin 14 from transformers.models.bert.modeling_bert import BertEncoder, BertForTokenClassification, BertModel, BertPooler 15 from transformers.models.canine.modeling_canine import ( 16 _PRIMES, 17 ACT2FN, (...) 33 TokenClassifierOutput, 34 ) File ~/miniforge3/envs/sat/lib/python3.11/site-packages/transformers/modeling_utils.py:46 44 from .configuration_utils import PretrainedConfig 45 from .dynamic_module_utils import custom_object_save ---> 46 from .generation import CompileConfig, GenerationConfig, GenerationMixin 47 from .integrations import PeftAdapterMixin, deepspeed_config, is_deepspeed_zero3_enabled 48 from .loss.loss_utils import LOSS_MAPPING ImportError: cannot import name 'GenerationMixin' from 'transformers.generation' (/home/bering-gpu-3/miniforge3/envs/sat/lib/python3.11/site-packages/transformers/generation/__init__.py)` ### Expected behavior There shouldn't be no ImportError and print `['This is a test ', 'This is another test']`
closed
completed
false
3
[ "bug" ]
[]
2025-02-03T08:40:41Z
2026-03-05T04:04:29Z
2025-03-15T08:04:24Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
sij411
77,471,503
MDQ6VXNlcjc3NDcxNTAz
User
false
huggingface/transformers
2,830,956,433
I_kwDOCUB6oc6ovPeR
36,032
https://github.com/huggingface/transformers/issues/36032
https://api.github.com/repos/huggingface/transformers/issues/36032
T5 Tokenzier not load with `AttributeError: add_special_tokens conflicts with the method add_special_tokens in T5Tokenizer`
### System Info - `transformers` version: 4.48.2 - Platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.12.0 - Huggingface_hub version: 0.28.1 - Safetensors version: 0.5.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.6.0+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: No - GPU type: NVIDIA A100-SXM4-40GB ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Running this script should reproduce the error ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model_name = "StonyBrookNLP/teabreac-preasm-large-drop" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) ``` It fails with ```zsh --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[6], line 2 1 model_name = "StonyBrookNLP/teabreac-preasm-large-drop" ----> 2 tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) 3 model = AutoModelForSeq2SeqLM.from_pretrained(model_name) 4 # enable_digit_tokenization(tokenizer) File ~/my/path/lib/python3.12/site-packages/transformers/models/auto/tokenization_auto.py:934, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 930 if tokenizer_class is None: 931 raise ValueError( 932 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported." 
933 ) --> 934 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 936 # Otherwise we have to be creative. 937 # if model is an encoder decoder, the encoder tokenizer class is used by default 938 if isinstance(config, EncoderDecoderConfig): File ~/my/path/lib/python3.12/site-packages/transformers/tokenization_utils_base.py:2036, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, trust_remote_code, *init_inputs, **kwargs) 2033 else: 2034 logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}") -> 2036 return cls._from_pretrained( 2037 resolved_vocab_files, 2038 pretrained_model_name_or_path, 2039 init_configuration, 2040 *init_inputs, 2041 token=token, 2042 cache_dir=cache_dir, 2043 local_files_only=local_files_only, 2044 _commit_hash=commit_hash, 2045 _is_local=is_local, 2046 trust_remote_code=trust_remote_code, 2047 **kwargs, 2048 ) File ~/my/path/lib/python3.12/site-packages/transformers/tokenization_utils_base.py:2276, in PreTrainedTokenizerBase._from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, token, cache_dir, local_files_only, _commit_hash, _is_local, trust_remote_code, *init_inputs, **kwargs) 2274 # Instantiate the tokenizer. 2275 try: -> 2276 tokenizer = cls(*init_inputs, **init_kwargs) 2277 except import_protobuf_decode_error(): 2278 logger.info( 2279 "Unable to load tokenizer model from SPM, loading from TikToken will be attempted instead." 
2280 "(Google protobuf error: Tried to load SPM model with non-SPM vocab file).", 2281 ) File ~/my/path/lib/python3.12/site-packages/transformers/models/t5/tokenization_t5.py:189, in T5Tokenizer.__init__(self, vocab_file, eos_token, unk_token, pad_token, extra_ids, additional_special_tokens, sp_model_kwargs, legacy, add_prefix_space, **kwargs) 186 self._extra_ids = extra_ids 187 self.add_prefix_space = add_prefix_space --> 189 super().__init__( 190 eos_token=eos_token, 191 unk_token=unk_token, 192 pad_token=pad_token, 193 extra_ids=extra_ids, 194 additional_special_tokens=additional_special_tokens, 195 sp_model_kwargs=self.sp_model_kwargs, 196 legacy=legacy, 197 add_prefix_space=add_prefix_space, 198 **kwargs, 199 ) File ~/my/path/lib/python3.12/site-packages/transformers/tokenization_utils.py:435, in PreTrainedTokenizer.__init__(self, **kwargs) 432 self._added_tokens_encoder: Dict[str, int] = {k.content: v for v, k in self._added_tokens_decoder.items()} 434 # 4 init the parent class --> 435 super().__init__(**kwargs) 437 # 4. If some of the special tokens are not part of the vocab, we add them, at the end. 438 # the order of addition is the same as self.SPECIAL_TOKENS_ATTRIBUTES following `tokenizers` 439 self._add_tokens( 440 [token for token in self.all_special_tokens_extended if token not in self._added_tokens_encoder], 441 special_tokens=True, 442 ) File ~/my/path/lib/python3.12/site-packages/transformers/tokenization_utils_base.py:1407, in PreTrainedTokenizerBase.__init__(self, **kwargs) 1405 for key in kwargs: 1406 if hasattr(self, key) and callable(getattr(self, key)): -> 1407 raise AttributeError(f"{key} conflicts with the method {key} in {self.__class__.__name__}") 1409 self.init_kwargs = copy.deepcopy(kwargs) 1410 self.name_or_path = kwargs.pop("name_or_path", "") AttributeError: add_special_tokens conflicts with the method add_special_tokens in T5Tokenizer ``` ### Expected behavior I expected the tokenizer to load. 
Two similar issues were raised before, but neither resolved my problem: 1. https://github.com/huggingface/transformers/issues/33453 2. https://github.com/salesforce/CodeGen/issues/94
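The loader rejects any key in tokenizer_config.json whose name collides with a tokenizer method, and this checkpoint's config apparently carries an `add_special_tokens` entry. One workaround (my own sketch, not an official fix) is to download the checkpoint locally, strip the offending key from the config file, and load from the cleaned directory:

```python
import json
from pathlib import Path

def scrub_conflicting_keys(config_path: str, keys=("add_special_tokens",)) -> dict:
    """Drop tokenizer_config.json entries whose names shadow tokenizer methods."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    for key in keys:
        config.pop(key, None)  # remove the method-shadowing entry if present
    path.write_text(json.dumps(config, indent=2))
    return config
```

After e.g. `huggingface_hub.snapshot_download(model_name)`, run this on the local `tokenizer_config.json` and point `from_pretrained` at the local directory instead of the Hub id.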
closed
completed
false
10
[ "bug" ]
[]
2025-02-04T18:06:41Z
2026-03-02T08:12:23Z
2026-03-02T08:12:23Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
utkarsh-fileread
171,386,284
U_kgDOCjclrA
User
false
huggingface/transformers
2,859,319,686
I_kwDOCUB6oc6qbcGG
36,246
https://github.com/huggingface/transformers/issues/36246
https://api.github.com/repos/huggingface/transformers/issues/36246
ImportError: cannot import name 'Qwen2_5_VLImageProcessor' from 'transformers.models.qwen2_5_vl' (/usr/local/lib/python3.10/dist-packages/transformers/models/qwen2_5_vl/__init__.py)
### System Info vllm==0.7.2 transformers==4.49.0 ### Who can help? ImportError: cannot import name 'Qwen2_5_VLImageProcessor' from 'transformers.models.qwen2_5_vl' (/usr/local/lib/python3.10/dist-packages/transformers/models/qwen2_5_vl/__init__.py) ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction from vllm import LLM model = LLM(model="Qwen2.5-VL-7B", trust_remote_code=True, dtype="bfloat16") ### Expected behavior I can confirm that version 4.49.0.dev contains a script named image_processing_qwen2_5_vl.py, but version 4.49.0 removed it. ImportError: cannot import name 'Qwen2_5_VLImageProcessor' from 'transformers.models.qwen2_5_vl' (/usr/local/lib/python3.10/dist-packages/transformers/models/qwen2_5_vl/__init__.py)
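Until the caller (here vLLM) catches up with the removal, the usual defensive pattern is a fallback import: try the class name that existed in the dev release, then fall back to a still-supported entry point such as `AutoImageProcessor`. A generic, stdlib-only sketch of that pattern (the helper is mine; the transformers-specific names are only examples):

```python
import importlib

def first_available(*candidates):
    """Return the first (module_name, attribute) pair that resolves.

    Mirrors how a caller can stay compatible when a class such as
    Qwen2_5_VLImageProcessor is removed between transformers releases.
    """
    for module_name, attr in candidates:
        try:
            mod = importlib.import_module(module_name)
        except ImportError:
            continue  # module missing entirely; try the next candidate
        if hasattr(mod, attr):
            return getattr(mod, attr)
    raise ImportError(f"none of {candidates!r} resolved")
```

In this issue the practical resolution was version alignment between vllm and transformers; the sketch only shows how a library can tolerate such removals.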
closed
completed
false
5
[ "bug" ]
[]
2025-02-18T04:33:33Z
2026-01-25T23:28:19Z
2025-02-18T17:02:07Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
LaoWangGB
144,193,886
U_kgDOCJg5Xg
User
false
huggingface/transformers
2,865,407,905
I_kwDOCUB6oc6qyqeh
36,296
https://github.com/huggingface/transformers/issues/36296
https://api.github.com/repos/huggingface/transformers/issues/36296
tensor parallel training bug
### System Info transformers:4.45.dev0 python:3.11 linux ### Who can help? #34194 ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction
```
torchrun --nnodes 1 --nproc_per_node 2 --master_port 27654 run_clm.py \
    --model_name_or_path TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --do_train \
    --do_eval \
    --tp_size 2 \
    --output_dir /tmp/test-clm
```
**Unexpected behavior:**
```
RuntimeError: aten._foreach_norm_Scalar: got mixed torch.tensor and DTensor, need to convert all torch.tensor to DTensor before calling distributed operators.
```
### Expected behavior Tensor-parallel (auto-TP) training runs to completion.
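The error means the flattened parameter list handed to the fused grad-norm op mixed plain tensors with DTensors, i.e. some parameters were never sharded before gradient clipping. A stdlib-only sketch of the invariant being violated (the classes are stand-ins for torch.Tensor / DTensor, not the real types):

```python
class FakeTensor:   # stand-in for torch.Tensor
    pass

class FakeDTensor:  # stand-in for torch.distributed.tensor.DTensor
    pass

def check_homogeneous(params) -> str:
    """Raise if a parameter list mixes wrapper types, mirroring what
    aten._foreach_norm rejects for torch.Tensor vs DTensor."""
    kinds = {type(p).__name__ for p in params}
    if len(kinds) > 1:
        raise TypeError(f"got mixed parameter types: {sorted(kinds)}")
    return kinds.pop()
```

Running such a check over `model.parameters()` before `Trainer`'s gradient-norm step is one way to find which modules escaped the TP sharding plan.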
closed
completed
false
7
[ "bug" ]
[]
2025-02-20T08:15:10Z
2026-03-02T15:39:51Z
2025-03-31T08:04:28Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
iMountTai
35,353,688
MDQ6VXNlcjM1MzUzNjg4
User
false
huggingface/transformers
2,869,102,621
I_kwDOCUB6oc6rAwgd
36,331
https://github.com/huggingface/transformers/issues/36331
https://api.github.com/repos/huggingface/transformers/issues/36331
TypeError: CustomTrainer.compute_loss() got an unexpected keyword argument 'num_items_in_batch'
### System Info - `transformers` version: 4.50.0.dev0 - Platform: Linux-5.15.0-210.163.7.el8uek.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.16 - Huggingface_hub version: 0.29.1 - Safetensors version: 0.5.2 - Accelerate version: 1.4.0 - Accelerate config: not found - DeepSpeed version: 0.16.3 - PyTorch version (GPU?): 2.6.0+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: NO - Using GPU in script?: YES - GPU type: NVIDIA A100-SXM4-80GB ### Who can help? @muellerzr @SunMarc ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction My code: ```python code trainer = CustomTrainer( model=model, train_dataset=torch_format_dataset, eval_dataset=torch_format_dataset, args=training_args, data_collator=custom_data_collator, ) model.config.use_cache = False # silence the warnings. Please re-enable for inference! trainer.train() ``` Error Info: ``` [2025-02-20 19:14:49,033] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect) num_devices: 1 max_steps: 1250 /opt/saturncloud/envs/tofu/lib/python3.10/site-packages/transformers/training_args.py:1609: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead warnings.warn( [2025-02-20 19:14:50,775] [INFO] [comm.py:652:init_distributed] cdb=None [2025-02-20 19:14:50,775] [INFO] [comm.py:683:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. 
Use the --report_to flag to control the integrations used for logging result (for instance --report_to none). [2025-02-20 19:14:50,903] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 1 [2025-02-20 19:14:51,687] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 341, num_elems = 1.42B [2025-02-20 19:15:23,644] [WARNING] [engine.py:1244:_do_optimizer_sanity_check] **** You are using ZeRO with an untested optimizer, proceed with caution ***** Parameter Offload: Total persistent parameters: 544768 in 194 params 0%| | 0/1250 [00:00<?, ?it/s]Error executing job with overrides: ['split=full', 'batch_size=4', 'gradient_accumulation_steps=4', 'model_family=phi', 'lr=2e-5'] Traceback (most recent call last): File "/home/jovyan/mu-benchmark/finetune.py", line 125, in main trainer.train() File "/opt/saturncloud/envs/tofu/lib/python3.10/site-packages/transformers/trainer.py", line 2243, in train return inner_training_loop( File "/opt/saturncloud/envs/tofu/lib/python3.10/site-packages/transformers/trainer.py", line 2554, in _inner_training_loop tr_loss_step = self.training_step(model, inputs, num_items_in_batch) File "/opt/saturncloud/envs/tofu/lib/python3.10/site-packages/transformers/trainer.py", line 3704, in training_step loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch) TypeError: CustomTrainer.compute_loss() got an unexpected keyword argument 'num_items_in_batch' ``` Data Set: [https://huggingface.co/datasets/locuslab/TOFU](https://huggingface.co/datasets/locuslab/TOFU) ### Expected behavior Why would there be a compute_loss() error? I never gave a `num_items_in_batch` argument.
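Recent Trainer versions pass `num_items_in_batch` into `compute_loss`, so any override written against the old two- or three-argument signature fails exactly like this. The fix is to accept the new keyword (or `**kwargs`) in the override. A stand-alone sketch with a stub base class (not the real `transformers.Trainer`; the placeholder loss is mine):

```python
class StubTrainer:
    # stands in for transformers.Trainer, whose training_step now calls
    # compute_loss(model, inputs, num_items_in_batch=...) on newer versions
    def training_step(self, model, inputs, num_items_in_batch=None):
        return self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)

class CustomTrainer(StubTrainer):
    def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
        # accepting the new kwarg (or **kwargs) keeps the override compatible
        loss = sum(inputs)  # placeholder loss for the sketch
        return (loss, None) if return_outputs else loss
```

With the extra parameter in place, the same subclass works on both old and new Trainer versions, since older versions simply never pass it.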
closed
completed
false
13
[ "bug" ]
[]
2025-02-21T13:58:12Z
2026-03-06T10:49:50Z
2025-06-01T08:04:12Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
ruidazeng
31,152,346
MDQ6VXNlcjMxMTUyMzQ2
User
false
huggingface/transformers
2,914,781,972
I_kwDOCUB6oc6tvAsU
36,683
https://github.com/huggingface/transformers/issues/36683
https://api.github.com/repos/huggingface/transformers/issues/36683
AttributeError: 'Gemma3Config' object has no attribute 'vocab_size'
### System Info v4.50.0.dev0 ### Who can help? @ArthurZucker @LysandreJik @xenova ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am trying to run the new Gemma3 model, using version '4.50.0.dev0'. When loading the model I get the error: 'Gemma3Config' object has no attribute 'vocab_size'. Looking into this, it seems Gemma3Config has `vocab_size` nested in a "text_config" attribute. I try to load the model as AutoModelForCausalLM; running it with Gemma3ForConditionalGeneration does not raise this issue. Am I wrong in assuming I can run Gemma 3 as AutoModelForCausalLM? ### Expected behavior Loading the model as AutoModelForCausalLM.from_pretrained without issue.
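As the report notes, `vocab_size` lives on the nested `text_config` for multimodal Gemma 3 checkpoints. A toy sketch of the fallback lookup that resolves the attribute either way, using `SimpleNamespace` stand-ins rather than the real config classes:

```python
from types import SimpleNamespace

# Hypothetical helper mirroring the nesting described above: the multimodal
# config keeps vocab_size under text_config, so resolve it with a fallback.
def resolve_vocab_size(config):
    if hasattr(config, "vocab_size"):
        return config.vocab_size
    text_config = getattr(config, "text_config", None)
    if text_config is not None and hasattr(text_config, "vocab_size"):
        return text_config.vocab_size
    raise AttributeError("no vocab_size on config or config.text_config")

flat = SimpleNamespace(vocab_size=262144)
nested = SimpleNamespace(text_config=SimpleNamespace(vocab_size=262144))
print(resolve_vocab_size(flat), resolve_vocab_size(nested))  # 262144 262144
```

Transformers' own `config.get_text_config()` implements this kind of fallback; loading the checkpoint with `Gemma3ForConditionalGeneration` sidesteps the issue entirely, as observed above.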
open
reopened
false
39
[ "bug" ]
[]
2025-03-12T18:11:39Z
2026-03-23T13:36:49Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jumelet
9,407,977
MDQ6VXNlcjk0MDc5Nzc=
User
false
huggingface/transformers
2,931,161,181
I_kwDOCUB6oc6utfhd
36,817
https://github.com/huggingface/transformers/issues/36817
https://api.github.com/repos/huggingface/transformers/issues/36817
Add EuroBert Model To Config
### Model description I would like to have the EuroBert model added to the config (configuration_auto.py) :) Especially the 210M version: https://huggingface.co/EuroBERT This would probably solve an issue in Flair: https://github.com/flairNLP/flair/issues/3630 ``` File "C:\Users\nick\PycharmProjects\flair\.venv\Lib\site-packages\flair\embeddings\transformer.py", line 1350, in from_params config_class = CONFIG_MAPPING[model_type] ~~~~~~~~~~~~~~^^^^^^^^^^^^ File "C:\Users\nick\PycharmProjects\flair\.venv\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 794, in __getitem__ raise KeyError(key) KeyError: 'eurobert' ``` ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation @tomaarsen
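Until a model type is registered in `configuration_auto.py`, the `CONFIG_MAPPING[model_type]` lookup in the Flair traceback above raises `KeyError`. A toy sketch of that failure mode (the mapping here is a tiny stand-in, not the real registry):

```python
# Toy stand-in for transformers' real registry: CONFIG_MAPPING raises
# KeyError for model types that are not registered, which is exactly the
# failure in the Flair traceback above.
CONFIG_MAPPING = {"bert": "BertConfig", "llama": "LlamaConfig"}

def lookup(model_type):
    try:
        return CONFIG_MAPPING[model_type]
    except KeyError:
        return None  # caller must fall back to trust_remote_code / registration

print(lookup("bert"), lookup("eurobert"))  # BertConfig None
```

In the meantime, loading with `trust_remote_code=True`, or registering the config via `AutoConfig.register`, is the usual workaround for unregistered architectures.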
open
null
false
2
[ "New model" ]
[]
2025-03-19T09:56:20Z
2026-02-24T09:37:27Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
zynos
8,973,150
MDQ6VXNlcjg5NzMxNTA=
User
false
huggingface/transformers
2,947,704,577
I_kwDOCUB6oc6vsmcB
36,979
https://github.com/huggingface/transformers/issues/36979
https://api.github.com/repos/huggingface/transformers/issues/36979
[Community contributions] Model cards
Hey friends! 👋 We are currently in the process of improving the Transformers model cards by making them more directly useful for everyone. The main goal is to: 1. Standardize all model cards with a consistent format so users know what to expect when moving between different model cards or trying to learn how to use a new model. 2. Include a brief description of the model (what makes it unique/different) written in a way that's accessible to everyone. 3. Provide ready-to-use code examples featuring the `Pipeline`, `AutoModel`, and `transformers-cli` with available optimizations included. For large models, provide a quantization example so it's easier for everyone to run the model. 4. Include an attention mask visualizer for currently supported models to help users visualize what a model is seeing (refer to #36630 for more details). Compare the before and after model cards below: ![Image](https://github.com/user-attachments/assets/590de86f-cfd2-4dd0-9167-83b7d19d858a) With so many models in Transformers, we could really use a hand with standardizing the existing model cards. If you're interested in making a contribution, pick a model from the list below and then you can get started! ## Steps Each model card should follow the format below. You can copy the text exactly as it is! ```md # add appropriate badges <div style="float: right;"> <div class="flex flex-wrap space-x-1"> <img alt="" src="" > </div> </div> # Model name [Model name](https://huggingface.co/papers/...) ... A brief description of the model and what makes it unique/different. Try to write this like you're talking to a friend. You can find all the original [Model name] checkpoints under the [Model name](link) collection. > [!TIP] > This model was contributed by [author](link to Hub profile). > > Click on the [Model name] models in the right sidebar for more examples of how to apply [Model name] to different [insert task types here] tasks. 
The example below demonstrates how to [insert task here] with [`Pipeline`] or the [`AutoModel`] class. <hfoptions id="usage"> <hfoption id="Pipeline> insert pipeline code here </hfoption> <hfoption id="AutoModel"> add AutoModel code here </hfoption> <hfoption id="transformers-cli"> add transformers-cli usage here if applicable/supported, otherwise close the hfoption block </hfoption> </hfoptions> Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends. The example below uses [insert quantization method here](link to quantization method) to only quantize the weights to __. # add if this is supported for your model Use the [AttentionMaskVisualizer](https://github.com/huggingface/transformers/blob/beb9b5b02246b9b7ee81ddf938f93f44cfeaad19/src/transformers/utils/attention_visualizer.py#L139) to better understand what tokens the model can and cannot attend to. \```py from transformers.utils.attention_visualizer import AttentionMaskVisualizer visualizer = AttentionMaskVisualizer("google/gemma-3-4b-it") visualizer("<img>What is shown in this image?") \``` # upload image to https://huggingface.co/datasets/huggingface/documentation-images/tree/main/transformers/model_doc and ping me to merge <div class="flex justify-center"> <img src=""/> </div> ## Notes - Any other model-specific notes should go here. 
\```py <insert relevant code snippet here related to the note if its available> \ ``` ## Resources add links to external resources only ``` For examples, take a look at #36469 or the [BERT](https://huggingface.co/docs/transformers/main/en/model_doc/bert), [Llama](https://huggingface.co/docs/transformers/main/en/model_doc/llama), [Llama 2](https://huggingface.co/docs/transformers/main/en/model_doc/llama2), [Gemma](https://huggingface.co/docs/transformers/main/en/model_doc/gemma3) 3, [PaliGemma](https://huggingface.co/docs/transformers/main/en/model_doc/paligemma), [ViT](https://huggingface.co/docs/transformers/main/en/model_doc/vit), and [Whisper](https://huggingface.co/docs/transformers/main/en/model_doc/whisper) model cards on the `main` version of the docs. Once you're done or if you have any questions, feel free to ping @stevhliu to review. Don't add `fix` to your PR to avoid closing this issue. I'll also be right there working alongside you and opening PRs to convert the model cards so we can complete this faster together! 🤗 ## Models - [x] albert - #37753 - [x] align - #38072 - [x] altclip - #38306 - [x] aria - #38472 - [ ] audio_spectrogram_transformer - assigned to @KishanPipariya - [ ] autoformer.- #37231 - [x] aya_vision - #38749 - [x] bamba - #38853 - [ ] bark - assigned to @mdhvgoyal - [x] bart - #37858 - [x] barthez - #39701 - [x] bartpho - #40051 - [ ] beit - assigned to @parthchopra07 - [x] bert - [x] bert_generation - #40250 - [ ] bert_japanese - #39466 - [x] bertweet - #37981 - [x] big_bird - #37959 - [x] bigbird_pegasus - #39104 - [x] biogpt - #38214 - [ ] bit - [ ] blenderbot - assigned to @Diksha-Maurya - [ ] blenderbot_small - [x] blip. 
- #38513 - [ ] blip_2 - assigned to @olccihyeon - [ ] bloom - assigned to @AreebAhmad-02 - [ ] bridgetower - [ ] bros - [x] byt5 - #38699 - [x] camembert - #39227 - [x] canine - #38631 - [ ] chameleon - assigned to @ankitgpt18 - [ ] chinese_clip - [x] clap - #39738 - [x] clip - #37040 - [ ] clipseg - assigned to @dataWizard7957 - [ ] clvp - [x] code_llama - #37115 - [ ] codegen - #40471 - [x] cohere - #37056 - [x] cohere2 - #39604 - [x] colpali - #37309 - [ ] conditional_detr - [ ] convbert - #38470 - [ ] convnext - assigned to @Rklearns - [ ] convnextv2 - #40589 - [ ] cpm - [ ] cpmant - [ ] ctrl - assigned to @Ishubhammohole - [x] cvt - #38731 - [ ] dab_detr - [ ] dac - [ ] data2vec - assigned to @boy397 - [ ] dbrx - [x] deberta - #37409 - [x] deberta_v2 - #38895 - [ ] decision_transformer - [x] deformable_detr - #39902 - [ ] deit - [x] depth_anything - #37065 - [ ] depth_pro - [x] detr - #39822 - [ ] dialogpt - assigned to @Uvi-12 - [ ] diffllama - [ ] dinat - [x] dinov2 - #37104 - [ ] dinov2_with_registers - [x] distilbert - #37157 - [x] dit - #38721 - [x] donut - #37290 - [ ] dpr - [ ] dpt - [x] efficientloftr - #39620 - [ ] efficientnet - assigned to @Sudhesh-Rajan27 - [x] electra - #37063 - [ ] emu3 - [ ] encodec - [x] encoder_decoder - #39272 - [x] ernie - #39657 - [ ] esm - [x] falcon - #37184 - [x] falcon_mamba - #37253 - [ ] fastspeech2_conformer - #37377 - [ ] flaubert - [ ] flava - [ ] fnet - [ ] focalnet - [ ] fsmt - [ ] funnel - [ ] fuyu - [x] gemma - #37674 - [x] gemma2 - #37076 - [x] gemma3 - [ ] git - assigned to @Big-Marvel - [ ] glm - [ ] glpn - [ ] got_ocr2 - [x] gpt2 - #37101 - [ ] gpt_bigcode - #40615 - [x] gpt_neo - #38505 - [ ] gpt_neox - #38550 - [x] gpt_neox_japanese - #39862 - [ ] gpt_sw3 - [ ] gptj - #40404 - [x] granite - #37791 - [ ] granitemoe - assigned to @cassiasamp - [ ] granitemoeshared - [x] grounding_dino - #37925 - [ ] groupvit - assigned to @shreya888 - [ ] helium - [ ] herbert - [x] hgnetv2 - #39965 - [ ] hiera - [x] hubert 
- #39742 - [ ] ibert - [ ] idefics - assigned to @rraghavkaushik - [ ] idefics2 - [ ] idefics3 - [x] ijepa - #39354 - [ ] imagegpt - [ ] informer - [ ] instructblip - [ ] instructblipvideo - [x] jamba - #37152 - [ ] jetmoe - #40749 - [ ] kosmos2 - [x] layoutlm - #40129 - [ ] layoutlmv2 - [ ] layoutlmv3 - #37155 - [ ] layoutxlm - [x] led - #39233 - [ ] levit - [x] lightglue - #39407 - [ ] lilt - [x] llama - [x] llama2 - [ ] llama3 - assigned to @capnmav77 - [ ] llava - assigned to @itsmejul - [x] llava_next - #38894 - [ ] llava_next_video - [ ] llava_onevision - [x] longformer - #37622 - [ ] longt5 - [ ] luke - [ ] lxmert - [ ] m2m_100 - [x] mamba - #37863 - [x] mamba2 - #37951 - [x] marian - #39138 - [ ] markuplm - [ ] mask2former - [ ] maskformer - [x] mbart - #37619 - [x] mbart50 - #37619 - [ ] megatron_bert - #40568 - [ ] megatron_gpt2 - [ ] mgp_str - [x] mimi - #39824 - [x] mistral - #37156 - [x] mistral3 - #39531 - [ ] mixtral - assigned to @darmasrmez - [ ] mllama - #37647 - [ ] mluke - [x] mobilebert - #37256 - [x] mobilenet_v1 - #37948 - [x] mobilenet_v2 - #37948 - [x] mobilevit - #40033 - [ ] mobilevitv2 - [x] modernbert - #37052 - [x] modernbertdecoder - #39453 - [x] moonshine - #38711 - [ ] moshi - [ ] mpnet - assigned to @SanjayDevarajan03 - [ ] mpt - [ ] mra - [x] mt5 - #39702 - [ ] musicgen - [ ] musicgen_melody - #38955 - [ ] mvp - [ ] myt5 - [ ] nemotron - [x] nllb - #40074 - [ ] nllb_moe - [ ] nougat - [ ] nystromformer - [x] olmo - #40233 - [x] olmo2 - #38394 - [x] olmoe - #39344 - [ ] omdet_turbo - [ ] oneformer - [x] openai - #37255 - [x] opt - #39568 - [ ] owlv2 - [ ] owlvit - [x] paligemma - [ ] patchtsmixer - [ ] patchtst - [x] pegasus - #38675 - [x] pegasus_x - #38971 - [ ] perceiver - [ ] persimmon - [x] phi - #37583 - [ ] phi3 - assigned to @arpitsinghgautam - [x] phi4_multimodal - #38830 - [ ] phimoe - [ ] phobert - [ ] pix2struct - [ ] pixtral - #40442 - [ ] plbart - [ ] poolformer - [ ] pop2piano - [ ] prompt_depth_anything - [ ] 
prophetnet - assigned to @SahanaMark - [ ] pvt - [ ] pvt_v2 - [x] qwen2 - #37192 - [x] qwen2_5_vl - #37099 - [ ] qwen2_audio - [x] qwen2_moe - #38649 - [ ] qwen2_vl - assigned to @SaiSanthosh1508 - [x] rag - #40222 - [ ] recurrent_gemma - [ ] reformer - [ ] regnet - [ ] rembert - [ ] resnet - assigned to @BettyChen0616 - [x] roberta - #38777 - [ ] roberta_prelayernorm - assigned to @Yuvraj-Dhepe - [x] roc_bert - #38835 - [x] roformer - #37946 - [ ] rt_detr - [ ] rt_detr_v2 - [ ] rwkv - [ ] sam - #40578 - [ ] seamless_m4t - [ ] seamless_m4t_v2 - [x] segformer - #40417 - [ ] seggpt - [ ] sew - [ ] sew_d - [ ] shieldgemma2 - assigned to @BryanBradfo - [x] siglip - #37585 - [x] siglip2 - #37624 - [ ] smolvlm - assigned to @udapy - [ ] speech_encoder_decoder - [ ] speech_to_text - [ ] speecht5 - assigned to @HemanthSai7 - [ ] splinter - [ ] squeezebert - [ ] stablelm - [ ] starcoder2 - #40737 - [x] superglue - #39406 - [x] superpoint - #38896 - [ ] swiftformer - [ ] swin - assigned to @BryanBradfo - [ ] swin2sr - [x] swinv2 - #37942 - [x] switch_transformers - #39305 - [x] t5 - #37261 - [ ] table_transformer - [ ] tapas - [ ] textnet - [ ] time_series_transformer - [ ] timesformer - assigned to @mreraser - [ ] timm_backbone - [ ] timm_wrapper - [x] trocr - #40240 - [ ] tvp - [ ] udop - [ ] umt5 - [ ] unispeech - [ ] unispeech_sat - [ ] univnet - [ ] upernet - [ ] video_llava - [ ] videomae - #40573 - [ ] vilt - [ ] vipllava - [ ] vision_encoder_decoder - assigned to @Bhavay-2001 - [ ] vision_text_dual_encoder - [x] visual_bert - #40057 - [x] vit - [x] vit_mae - #38302 - [ ] vit_msn - assigned to @ChirayuXD - [ ] vitdet - [ ] vitmatte - [x] vitpose - #38630 - [ ] vitpose_backbone - [x] vits - #37335 - [ ] vivit - assigned to @mreraser - [ ] wav2vec2 - #38956 - [ ] wav2vec2_bert - #38957 - [ ] wav2vec2_conformer - #38958 - [ ] wav2vec2_phoneme - #38959 - [ ] wav2vec2_with_lm - assigned to @AshAnand34 - [ ] wavlm - #40047 - [x] whisper - [ ] x_clip - [ ] xglm - [x] xlm - 
#38595 - [x] xlm_roberta - #38596 - [x] xlm_roberta_xl - #38597 - [ ] xlnet - assigned to @vellankis-space - [ ] xmod - [x] yolos - #39528 - [ ] yoso - [ ] zamba - assigned to @devkade - [ ] zamba2 - assigned to @devkade - [x] zoedepth - #37898
closed
completed
false
195
[ "Good First Issue", "Good First Documentation Issue", "contributions-welcome" ]
[]
2025-03-25T20:39:10Z
2026-03-10T09:38:29Z
2025-09-17T15:03:55Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
stevhliu
59,462,357
MDQ6VXNlcjU5NDYyMzU3
User
false
huggingface/transformers
2,986,242,010
I_kwDOCUB6oc6x_m_a
37,428
https://github.com/huggingface/transformers/issues/37428
https://api.github.com/repos/huggingface/transformers/issues/37428
ImportError: cannot import name '_flash_supports_window_size' from 'transformers.modeling_flash_attention_utils'
### System Info Hi there, I'm using tridao's flash attention and I'm running into an import error with the transformers library: ``` File "/g/g14/venkatraman2/glm/glm/train/training.py", line 34, in <module> from glm.train.train_wrapper_registry import train_wrapper_registry File "/g/g14/venkatraman2/glm/glm/train/train_wrapper_registry.py", line 1, in <module> from .baseformer_train import BaseFormerWrapper File "/g/g14/venkatraman2/glm/glm/train/baseformer_train.py", line 9, in <module> from ..model.model_registry import model_registry File "/g/g14/venkatraman2/glm/glm/model/model_registry.py", line 1, in <module> from .esm3s import ESM3s File "/g/g14/venkatraman2/glm/glm/model/esm3s.py", line 36, in <module> from ring_flash_attn.ring_flash_attn_varlen import ( File "/p/vast1/OpenFoldCollab/genome_lm/envs/glm_rocm6_3_1_re/lib/python3.12/site-packages/ring_flash_attn/__init__.py", line 37, in <module> from .adapters import ( File "/p/vast1/OpenFoldCollab/genome_lm/envs/glm_rocm6_3_1_re/lib/python3.12/site-packages/ring_flash_attn/adapters/__init__.py", line 1, in <module> from .hf_adapter import ( File "/p/vast1/OpenFoldCollab/genome_lm/envs/glm_rocm6_3_1_re/lib/python3.12/site-packages/ring_flash_attn/adapters/hf_adapter.py", line 9, in <module> from transformers.modeling_flash_attention_utils import ( ImportError: cannot import name '_flash_supports_window_size' from 'transformers.modeling_flash_attention_utils' (/p/vast1/OpenFoldCollab/genome_lm/envs/glm_rocm6_3_1_re/lib/python3.12/site-packages/transformers/modeling_flash_attention_utils.py) ``` Do you have any suggestions regarding how to resolve this? Others have encountered this as well: https://github.com/Dao-AILab/flash-attention/issues/1491 Thank you! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. Import the following, after having cloned `ring_flash_attn` from the tridao repository. ``` from ring_flash_attn.ring_flash_attn_varlen import ( ring_flash_attn_varlen_kvpacked_func, )``` ### Expected behavior I would expect '_flash_supports_window_size' to be importable from 'transformers.modeling_flash_attention_utils'
closed
completed
false
5
[ "bug" ]
[]
2025-04-10T16:30:59Z
2026-02-09T14:37:12Z
2025-05-20T08:02:52Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
mv2731
60,421,398
MDQ6VXNlcjYwNDIxMzk4
User
false
huggingface/transformers
3,036,862,351
I_kwDOCUB6oc61AteP
37,934
https://github.com/huggingface/transformers/issues/37934
https://api.github.com/repos/huggingface/transformers/issues/37934
Is Llama4TextL2Norm meant to be RMS norm?
https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama4/modeling_llama4.py#L118 ``` x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) ``` This is just the rms norm?
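Yes, the quoted expression is RMS normalization without a learnable scale. A plain-Python sketch of `x * rsqrt(mean(x^2) + eps)` showing that the output's root-mean-square is (approximately) 1:

```python
import math

# Plain-Python sketch of the expression above: x * rsqrt(mean(x^2) + eps).
# This is RMS normalization without a learnable weight parameter.
def l2norm(x, eps=1e-6):
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms for v in x]

out = l2norm([3.0, 4.0])
# mean of squares = 12.5, rms ≈ 3.5355, so out ≈ [0.8485, 1.1314]
rms_of_out = math.sqrt(sum(v * v for v in out) / len(out))
print(round(rms_of_out, 6))  # 1.0
```

The difference from a typical `RMSNorm` module is only the missing learnable `weight` multiplier.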
closed
completed
false
2
[]
[]
2025-05-02T21:28:05Z
2026-03-10T06:05:09Z
2025-06-11T08:02:45Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
0x6b64
105,229,704
U_kgDOBkWtiA
User
false
huggingface/transformers
3,054,509,355
I_kwDOCUB6oc62EB0r
38,066
https://github.com/huggingface/transformers/issues/38066
https://api.github.com/repos/huggingface/transformers/issues/38066
`AutoModel.from_pretrained(...)` (with explicit `device_map` unset) fails under `with torch.device("meta")` with PyTorch 2.6.0 and 2.7.0
```python # from torch.nn.attention.flex_attention import BlockMask, flex_attention from transformers import AutoModel import torch with torch.device('meta'): AutoModel.from_pretrained('Qwen/Qwen2.5-0.5B', trust_remote_code=True) ```` I found this code in the wild in https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/f6d1ec77ce2ce18f3d925a1014c9e4d6b4ad3072/orz/ppo/actors.py#L745-L746 (linked issue https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/issues/71) fails with: ``` Sliding Window Attention is enabled but not implemented for `sdpa`; unexpected results may be encountered. --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) [<ipython-input-1-00ba4c43be18>](https://localhost:8080/#) in <cell line: 0>() 4 5 with torch.device('meta'): ----> 6 AutoModel.from_pretrained('Qwen/Qwen2.5-0.5B', trust_remote_code=True) 6 frames [/usr/local/lib/python3.11/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 569 if model_class.config_class == config.sub_configs.get("text_config", None): 570 config = config.get_text_config() --> 571 return model_class.from_pretrained( 572 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs 573 ) [/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in _wrapper(*args, **kwargs) 277 old_dtype = torch.get_default_dtype() 278 try: --> 279 return func(*args, **kwargs) 280 finally: 281 torch.set_default_dtype(old_dtype) [/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, weights_only, *model_args, **kwargs) 4397 offload_index, 4398 error_msgs, -> 4399 ) = 
cls._load_pretrained_model( 4400 model, 4401 state_dict, [/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in _load_pretrained_model(cls, model, state_dict, checkpoint_files, pretrained_model_name_or_path, ignore_mismatched_sizes, sharded_metadata, device_map, disk_offload_folder, offload_state_dict, dtype, hf_quantizer, keep_in_fp32_regex, device_mesh, key_mapping, weights_only) 4831 # Skip it with fsdp on ranks other than 0 4832 elif not (is_fsdp_enabled() and not is_local_dist_rank_0() and not is_quantized): -> 4833 disk_offload_index, cpu_offload_index = _load_state_dict_into_meta_model( 4834 model_to_load, 4835 state_dict, [/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py](https://localhost:8080/#) in decorate_context(*args, **kwargs) 114 def decorate_context(*args, **kwargs): 115 with ctx_factory(): --> 116 return func(*args, **kwargs) 117 118 return decorate_context [/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in _load_state_dict_into_meta_model(model, state_dict, shard_file, expected_keys, reverse_renaming_mapping, device_map, disk_offload_folder, disk_offload_index, cpu_offload_folder, cpu_offload_index, hf_quantizer, is_safetensors, keep_in_fp32_regex, unexpected_keys, device_mesh) 822 param_device = "cpu" if is_local_dist_rank_0() else "meta" 823 --> 824 _load_parameter_into_model(model, param_name, param.to(param_device)) 825 826 else: [/usr/local/lib/python3.11/dist-packages/torch/utils/_device.py](https://localhost:8080/#) in __torch_function__(self, func, types, args, kwargs) 102 if func in _device_constructors() and kwargs.get('device') is None: 103 kwargs['device'] = self.device --> 104 return func(*args, **kwargs) 105 106 # NB: This is directly called from C++ in torch/csrc/Device.cpp NotImplementedError: Cannot copy out of meta tensor; no data! 
``` Also, unless uncommenting the first line, it also fails on 2.6.0 with `RuntimeError: Tensor.item() cannot be called on meta tensors`: - https://github.com/pytorch/pytorch/issues/153330
closed
completed
false
10
[]
[]
2025-05-10T20:35:19Z
2026-02-03T16:32:34Z
2025-07-12T08:03:15Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
vadimkantorov
1,041,752
MDQ6VXNlcjEwNDE3NTI=
User
false
huggingface/transformers
3,068,593,888
I_kwDOCUB6oc625wbg
38,175
https://github.com/huggingface/transformers/issues/38175
https://api.github.com/repos/huggingface/transformers/issues/38175
Unexpected Zero Probabilities with siglip2-base-patch16-224 Model
### System Info ``` transformers version: 4.51.3 Platform: Linux Python version: 3.10.14 PyTorch version (GPU?): 2.2.2 (CUDA available: True) Huggingface Hub version: 0.31.2 Safetensors version: 0.5.3 Accelerate version: 1.7.0 Accelerate config: Not configured TensorFlow version (GPU?): Not installed Flax version (CPU?/GPU?/TPU?): Not installed JAX version: Not installed JAXLib version: Not installed ``` ### Who can help? @amyeroberts , @qubvel ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Here's the code snippet that reproduces the issue: ``` from PIL import Image import requests from transformers import AutoProcessor, AutoModel import torch model = AutoModel.from_pretrained("google/siglip2-base-patch16-224") processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["a photo of 2 cats", "a photo of 2 dogs"] # Important: we pass `padding=max_length` since the model was trained with this inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits_per_image = outputs.logits_per_image probs = torch.sigmoid(logits_per_image) # These are the probabilities print(f"{probs[0][0]:.1%} that image 0 is '{texts[0]}'") ``` ### Expected behavior Expected behavior: ``` 31.9% that image 0 is 'a photo of 2 cats' ``` Actual behavior: ``` 0.0% that image 0 is 'a photo of 2 cats' ``` Additional context: - The issue persists across multiple runs and environments. - No modifications have been made to the model or processor. - The problem arises when using the official example scripts provided in the documentation.
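For context on why `0.0%` is suspicious: with sigmoid scoring, a near-zero probability implies a strongly negative logit, so the readout points at the logits being far off (e.g. wrong preprocessing or weights) rather than a rounding artifact. A quick plain-Python illustration; the `-0.76` logit is back-computed from the expected 31.9%, not taken from the model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# a logit of roughly -0.76 reproduces the documented 31.9%
print(round(sigmoid(-0.76), 3))  # 0.319
# a strongly negative logit is what a hard 0.0% readout implies
print(round(sigmoid(-20.0), 3))  # 0.0
```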
closed
completed
false
4
[ "bug" ]
[]
2025-05-16T10:18:19Z
2026-02-12T08:50:16Z
2025-05-30T13:57:13Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Magician6174
86,114,922
MDQ6VXNlcjg2MTE0OTIy
User
false
huggingface/transformers
3,113,022,900
I_kwDOCUB6oc65jPW0
38,549
https://github.com/huggingface/transformers/issues/38549
https://api.github.com/repos/huggingface/transformers/issues/38549
Clarification on default top_k sampling parameter
Hi 🤗 team, I'm writing to inquire about the design choice to set the default top_k sampling parameter to 50 in the transformers library. https://github.com/huggingface/transformers/blob/f4fc42216cd56ab6b68270bf80d811614d8d59e4/src/transformers/generation/configuration_utils.py#L431 It appears top_k is the only sampling parameter with an opinionated default value, as others like top_p are typically set to a neutral value (e.g., 1.0). For consistency and to allow for more flexible default behavior (i.e., no top_k filtering by default), I would personally advocate for a default value of -1, similar to how vLLM handles its sampling parameters (vLLM [SamplingParams documentation](https://docs.vllm.ai/en/v0.8.4/api/inference_params.html#vllm.SamplingParams)). Could you please clarify the reasoning behind this specific default? Thank you for your time and consideration.
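For reference, the semantics under discussion can be sketched as follows: a positive `top_k` keeps only the k largest logits, while a sentinel (0, or -1 in vLLM's `SamplingParams`) disables the filter. This is an illustrative stand-alone implementation, not the library's `TopKLogitsWarper`:

```python
# Illustrative top-k filtering semantics: a sentinel value (<= 0) disables
# the filter entirely, a positive k masks everything but the k largest logits.
def top_k_filter(logits, top_k):
    if top_k is None or top_k <= 0:  # sentinel: no filtering
        return list(logits)
    threshold = sorted(logits, reverse=True)[top_k - 1]
    return [x if x >= threshold else float("-inf") for x in logits]

logits = [2.0, 0.5, 1.5, -1.0]
print(top_k_filter(logits, 2))   # [2.0, -inf, 1.5, -inf]
print(top_k_filter(logits, -1))  # unchanged: [2.0, 0.5, 1.5, -1.0]
```

With a neutral default, the second call is what generation would do out of the box; with `top_k=50` as the default, the first behavior silently applies unless the user overrides it.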
closed
completed
false
3
[]
[]
2025-06-03T08:42:36Z
2026-02-13T18:39:29Z
2025-07-12T08:02:49Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
MostHumble
56,939,432
MDQ6VXNlcjU2OTM5NDMy
User
false
huggingface/transformers
3,121,797,099
I_kwDOCUB6oc66Etfr
38,617
https://github.com/huggingface/transformers/issues/38617
https://api.github.com/repos/huggingface/transformers/issues/38617
ImportError: cannot import name 'layer_type_validation' from 'transformers.configuration_utils'
### System Info env: Name: transformers Version: 4.53.0.dev0 When I called the code below: `model = AutoModelForImageTextToText.from_pretrained(model_id, local_files_only=True, **model_kwargs)` where model_id is a MedGemma model from https://huggingface.co/models?other=medgemma, the error ImportError: cannot import name 'layer_type_validation' from 'transformers.configuration_utils' (/usr/local/lib/python3.11/dist-packages/transformers/configuration_utils.py) occurred. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction model = AutoModelForImageTextToText.from_pretrained(model_id, local_files_only=True, **model_kwargs) # note: model_id from https://huggingface.co/models?other=medgemma ### Expected behavior Run without error.
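`layer_type_validation` only exists in recent transformers releases, so this import error usually indicates that another installed copy of transformers predates Gemma-family support. A hypothetical pre-flight version check; the `4.50.0` threshold is an assumption for illustration:

```python
# Hypothetical pre-flight check: parse a transformers version string and
# compare against an assumed minimum. The 4.50.0 threshold is illustrative.
def version_tuple(v):
    # "4.53.0.dev0" -> (4, 53, 0); ignore dev/rc suffixes
    parts = []
    for p in v.split("."):
        if p.isdigit():
            parts.append(int(p))
        else:
            break
    return tuple(parts)

def supports_gemma3(installed, required="4.50.0"):
    return version_tuple(installed) >= version_tuple(required)

print(supports_gemma3("4.49.0"), supports_gemma3("4.53.0.dev0"))  # False True
```

If the check fails at runtime despite a dev install, a stale copy of transformers earlier on `sys.path` is a likely culprit.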
closed
completed
false
2
[ "bug" ]
[]
2025-06-05T16:09:03Z
2026-02-12T04:42:18Z
2025-06-15T07:56:37Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Jacoobr
25,550,204
MDQ6VXNlcjI1NTUwMjA0
User
false
huggingface/transformers
3,135,466,321
I_kwDOCUB6oc6642tR
38,740
https://github.com/huggingface/transformers/issues/38740
https://api.github.com/repos/huggingface/transformers/issues/38740
[DOCS] Add `pruna` as optimization framework
### Feature request Have a section on Pruna AI within the documentation. We did [a similar PR for diffusers](https://github.com/huggingface/diffusers/pull/11688) and thought it would be nice to show how to optimize transformers models too. ### Motivation Have a section on Pruna AI within the documentation to show how to optimize LLMs for inference. ### Your contribution We could do everything for the PR.
closed
completed
false
9
[ "Feature request" ]
[]
2025-06-11T04:52:33Z
2026-02-27T14:04:57Z
2026-02-27T14:04:50Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
davidberenstein1957
25,269,220
MDQ6VXNlcjI1MjY5MjIw
User
false
huggingface/transformers
3,202,815,590
I_kwDOCUB6oc6-5xZm
39,224
https://github.com/huggingface/transformers/issues/39224
https://api.github.com/repos/huggingface/transformers/issues/39224
transformers: FlaubertTokenizer: do_lowercase_and_remove_accent: make the logger warning actionable (don't only tell what's wrong, rather suggest what could be done about that)
Please, make the logger warning below *actionable* (**don't only tell what's wrong, rather suggest what could be done about that**): https://github.com/huggingface/transformers/blob/e6a8063ef1af16df964b644b07e1d17e96555d23/src/transformers/models/flaubert/tokenization_flaubert.py#L208-L209 Here's more context: https://github.com/huggingface/transformers/blob/e6a8063ef1af16df964b644b07e1d17e96555d23/src/transformers/models/flaubert/tokenization_flaubert.py#L205-L212 The community would appreciate it. Thank you HF 🤗
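A sketch of what an actionable version of that warning could look like: it names the mismatch and tells the user exactly which kwarg to pass. The wording and the suggested remedy are illustrative assumptions, not proposed library code:

```python
import logging

logger = logging.getLogger("flaubert.tokenization")

# Illustrative only: an actionable warning names the mismatch AND the fix.
def warn_lowercase_mismatch(do_lowercase_and_remove_accent):
    suggested = not do_lowercase_and_remove_accent
    msg = (
        f"`do_lowercase_and_remove_accent={do_lowercase_and_remove_accent}` does not "
        f"match the casing this checkpoint was trained with. Pass "
        f"`do_lowercase_and_remove_accent={suggested}` to "
        f"`FlaubertTokenizer.from_pretrained(...)` to match the pretrained model, "
        f"or keep your setting and expect degraded tokenization."
    )
    logger.warning(msg)
    return msg

message = warn_lowercase_mismatch(True)
print("do_lowercase_and_remove_accent=False" in message)  # True
```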
open
null
false
21
[]
[]
2025-07-04T13:48:52Z
2026-03-18T08:41:18Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
kirisakow
11,773,604
MDQ6VXNlcjExNzczNjA0
User
false
huggingface/transformers
3,214,087,656
I_kwDOCUB6oc6_kxXo
39,290
https://github.com/huggingface/transformers/issues/39290
https://api.github.com/repos/huggingface/transformers/issues/39290
v4.53.0+ starts erroring with 'Gemma3TextConfig' object has no attribute 'sliding_window_pattern' with vLLM
### System Info - `transformers` version: 4.53.1 - Platform: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.31 - Python version: 3.11.13 - Huggingface_hub version: 0.33.2 - Safetensors version: 0.5.3 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.6.0+cu124 (CUDA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: Yes - GPU type: NVIDIA H100 80GB HBM3 ### Who can help? @ArthurZucker @Cyrilvallez ### Reproduction With vLLM <= 0.8.5.post1, upgrading transformers to 4.53.0 and above causes `AttributeError: 'Gemma3TextConfig' object has no attribute 'sliding_window_pattern'.`, likely because of the changes to Gemma 3 in this PR: https://github.com/huggingface/transformers/pull/37866. ```sh pip install transformers==4.53.1 # latest version, as long as >= 4.53.0 breaks pip install vllm==0.8.4 ``` ```python from vllm import LLM llm = LLM(model="google/gemma-3-12b-it") ``` <details> <summary>Error stacktrace</summary> <pre> ``` ERROR 07-08 22:51:23 [core.py:396] Traceback (most recent call last): ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 387, in run_engine_core ERROR 07-08 22:51:23 [core.py:396] engine_core = EngineCoreProc(*args, **kwargs) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 329, in __init__ ERROR 07-08 22:51:23 [core.py:396] super().__init__(vllm_config, executor_class, log_stats, ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 64, 
in __init__ ERROR 07-08 22:51:23 [core.py:396] self.model_executor = executor_class(vllm_config) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 52, in __init__ ERROR 07-08 22:51:23 [core.py:396] self._init_executor() ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 47, in _init_executor ERROR 07-08 22:51:23 [core.py:396] self.collective_rpc("load_model") ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc ERROR 07-08 22:51:23 [core.py:396] answer = run_method(self.driver_worker, method, args, kwargs) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/utils.py", line 2456, in run_method ERROR 07-08 22:51:23 [core.py:396] return func(*args, **kwargs) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 162, in load_model ERROR 07-08 22:51:23 [core.py:396] self.model_runner.load_model() ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1332, in load_model ERROR 07-08 22:51:23 [core.py:396] self.model = get_model(vllm_config=self.vllm_config) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/model_loader/__init__.py", line 14, in 
get_model ERROR 07-08 22:51:23 [core.py:396] return loader.load_model(vllm_config=vllm_config) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 452, in load_model ERROR 07-08 22:51:23 [core.py:396] model = _initialize_model(vllm_config=vllm_config) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 133, in _initialize_model ERROR 07-08 22:51:23 [core.py:396] return model_class(vllm_config=vllm_config, prefix=prefix) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/models/gemma3_mm.py", line 490, in __init__ ERROR 07-08 22:51:23 [core.py:396] self.language_model = init_vllm_registered_model( ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 286, in init_vllm_registered_model ERROR 07-08 22:51:23 [core.py:396] return _initialize_model(vllm_config=vllm_config, prefix=prefix) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/model_loader/loader.py", line 133, in _initialize_model ERROR 07-08 22:51:23 [core.py:396] return model_class(vllm_config=vllm_config, prefix=prefix) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File 
"/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/models/gemma3.py", line 493, in __init__ ERROR 07-08 22:51:23 [core.py:396] self.model = Gemma3Model(vllm_config=vllm_config, ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/compilation/decorators.py", line 151, in __init__ ERROR 07-08 22:51:23 [core.py:396] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs) ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/models/gemma3.py", line 360, in __init__ ERROR 07-08 22:51:23 [core.py:396] self.start_layer, self.end_layer, self.layers = make_layers( ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 609, in make_layers ERROR 07-08 22:51:23 [core.py:396] [PPMissingLayer() for _ in range(start_layer)] + [ ERROR 07-08 22:51:23 [core.py:396] ^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 610, in <listcomp> ERROR 07-08 22:51:23 [core.py:396] maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}")) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/models/gemma3.py", line 362, in <lambda> ERROR 07-08 22:51:23 [core.py:396] lambda prefix: Gemma3DecoderLayer( ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/models/gemma3.py", line 288, in __init__ ERROR 07-08 22:51:23 
[core.py:396] self.self_attn = Gemma3Attention( ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/vllm/model_executor/models/gemma3.py", line 151, in __init__ ERROR 07-08 22:51:23 [core.py:396] (layer_idx + 1) % config.sliding_window_pattern)) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] File "/root/miniconda3/envs/transformers-issue/lib/python3.11/site-packages/transformers/configuration_utils.py", line 209, in __getattribute__ ERROR 07-08 22:51:23 [core.py:396] return super().__getattribute__(key) ERROR 07-08 22:51:23 [core.py:396] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ERROR 07-08 22:51:23 [core.py:396] AttributeError: 'Gemma3TextConfig' object has no attribute 'sliding_window_pattern' ``` </pre> </details> Newer versions of vLLM also have quality issues particularly when upgrading transformers>=4.53.0 which are reported in https://github.com/vllm-project/vllm/issues/20341 . ### Expected behavior Should have the same behavior as transformers 4.52.4 + vLLM 0.8.4 ```python from vllm import LLM llm = LLM(model="google/gemma-3-12b-it") print(llm.generate("what is transformers")[0].outputs[0]) ``` ```python CompletionOutput(index=0, text='?>\n\nTransformers are a powerful type of neural network architecture that has revolutionized the', token_ids=[255999, 13765, 108, 214568, 659, 496, 8632, 1722, 529, 22823, 3707, 13217, 600, 815, 176839, 506], cumulative_logprob=None, logprobs=None, finish_reason=length, stop_reason=None) ```
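The traceback bottoms out in vLLM's `Gemma3Attention`, which still reads `config.sliding_window_pattern`, while transformers >= 4.53 reworked the Gemma 3 config in #37866. A minimal compatibility sketch — the `layer_types` attribute name and its values are assumptions about the newer config layout, not a confirmed vLLM fix — could derive the old value:

```python
from types import SimpleNamespace

def get_sliding_window_pattern(config):
    """Return the legacy sliding_window_pattern if the config still has it,
    otherwise derive it from a per-layer layer_types list (assumed layout:
    N-1 sliding-attention layers followed by one full-attention layer)."""
    pattern = getattr(config, "sliding_window_pattern", None)
    if pattern is not None:
        return pattern
    for i, layer_type in enumerate(getattr(config, "layer_types", []) or []):
        if layer_type == "full_attention":
            return i + 1  # pattern length = index of first full-attention layer + 1
    return 1  # fall back to "every layer is full attention"

# stand-ins for the old and new config shapes
old_cfg = SimpleNamespace(sliding_window_pattern=6)
new_cfg = SimpleNamespace(layer_types=["sliding_attention"] * 5 + ["full_attention"])
print(get_sliding_window_pattern(old_cfg), get_sliding_window_pattern(new_cfg))  # 6 6
```

In practice the mismatch was resolved by upgrading vLLM to a release built against the new config, but a shim like this shows where the two versions disagree.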
closed
completed
false
6
[ "bug" ]
[]
2025-07-09T00:28:57Z
2026-03-09T07:04:55Z
2025-07-09T14:10:40Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
es94129
12,763,339
MDQ6VXNlcjEyNzYzMzM5
User
false
huggingface/transformers
3,228,950,168
I_kwDOCUB6oc7Add6Y
39,401
https://github.com/huggingface/transformers/issues/39401
https://api.github.com/repos/huggingface/transformers/issues/39401
Qwen3 tokenizer wrong offset_mapping
### System Info transformers 4.53.2, Ubuntu 22.04.4, python 3.11.13 ### Who can help? @ArthurZucker and @itazap There must be a problem with the `offset_mapping` of the Qwen3 `tokenizer`. The starting point in the text of every token except the first and the last is one position too early. I compared it with BERT's `tokenizer`, which produces the expected output: ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` sample_text='A girl is styling her hair.' bert_tokenizer = BertTokenizerFast.from_pretrained('google-bert/bert-base-cased') bert_encoding = bert_tokenizer( text=sample_text, add_special_tokens=False, return_offsets_mapping=True ) print(bert_encoding['offset_mapping']) qwen_tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-0.6B') qwen_encoding = qwen_tokenizer( text=sample_text, add_special_tokens=False, return_offsets_mapping=True ) print(qwen_encoding['offset_mapping']) ``` ### Expected behavior [(0, 1), (2, 6), (7, 9), (10, 17), (18, 21), (22, 26), (26, 27)] [(0, 1), (1, 6), (6, 9), (9, 17), (17, 21), (21, 26), (26, 27)]
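The Qwen3 spans differ from BERT's only in that each token's start swallows the preceding space (the BPE vocabulary keeps the space prefix on tokens like " girl"). Until this is settled upstream, a small post-processing sketch — my own helper, not a transformers API — trims leading whitespace from each span:

```python
def trim_offsets(text, offsets):
    """Move each (start, end) span's start past leading whitespace so
    space-prefixed BPE offsets line up with word boundaries."""
    trimmed = []
    for start, end in offsets:
        while start < end and text[start].isspace():
            start += 1
        trimmed.append((start, end))
    return trimmed

sample_text = "A girl is styling her hair."
qwen_offsets = [(0, 1), (1, 6), (6, 9), (9, 17), (17, 21), (21, 26), (26, 27)]
print(trim_offsets(sample_text, qwen_offsets))
# -> [(0, 1), (2, 6), (7, 9), (10, 17), (18, 21), (22, 26), (26, 27)]
```

The trimmed spans match the BERT-style output shown under "Expected behavior".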
closed
completed
false
6
[ "bug" ]
[]
2025-07-14T14:21:08Z
2026-01-26T07:39:40Z
2025-07-16T09:59:35Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
contribcode
24,355,946
MDQ6VXNlcjI0MzU1OTQ2
User
false
huggingface/transformers
3,229,815,847
I_kwDOCUB6oc7AgxQn
39,404
https://github.com/huggingface/transformers/issues/39404
https://api.github.com/repos/huggingface/transformers/issues/39404
Whisper `return_language` with pipeline no longer working
### System Info Platform: Initially discovered on Nvidia. Can be reproduced on CPU and in Google Colab (see attached gist). - `transformers` version: 4.53.2 - Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 0.33.4 - Safetensors version: 0.5.3 - Accelerate version: 1.8.1 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.7.1+cu126 (CUDA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: Yes and No. - GPU type: NVIDIA GeForce RTX 3090 ### Who can help? @eustlb @ArthurZucker ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction <s>Sometime between `transformers==4.46.3` and `transfomers==4.53.2 (latest as of now)`,</s> At #34135, the `return_language` argument for pipeline stopped working. The ending timestamp for the last word is also missing. 
Example (exported from Google Colab): https://gist.github.com/Metric-Void/ce2b9fe2faed0cdf6e5fd328599fd4c7 Code for testing: ``` import torch from transformers import pipeline from transformers.configuration_utils import PretrainedConfig pipeline = pipeline( task="automatic-speech-recognition", model="openai/whisper-tiny", torch_dtype=torch.float16, config=PretrainedConfig( attn_implementation="flash_attention_2" ) ) result = pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac", return_language=True, return_timestamps='word') result["chunks"] ``` Before (`transformers==4.46.3`): ``` [{'text': ' I', 'timestamp': (1.04, 1.36), 'language': 'english'}, {'text': ' have', 'timestamp': (1.36, 1.68), 'language': 'english'}, {'text': ' a', 'timestamp': (1.68, 1.94), 'language': 'english'}, {'text': ' dream.', 'timestamp': (1.94, 3.82), 'language': 'english'}, {'text': ' Good', 'timestamp': (3.82, 3.98), 'language': 'english'}, {'text': ' one', 'timestamp': (3.98, 4.16), 'language': 'english'}, {'text': ' day.', 'timestamp': (4.16, 6.4), 'language': 'english'}, {'text': ' This', 'timestamp': (6.4, 6.58), 'language': 'english'}, {'text': ' nation', 'timestamp': (6.58, 7.24), 'language': 'english'}, {'text': ' will', 'timestamp': (7.24, 7.82), 'language': 'english'}, {'text': ' rise', 'timestamp': (7.82, 8.3), 'language': 'english'}, {'text': ' up.', 'timestamp': (8.3, 10.3), 'language': 'english'}, {'text': ' Live', 'timestamp': (10.3, 10.56), 'language': 'english'}, {'text': ' out', 'timestamp': (10.56, 10.98), 'language': 'english'}, {'text': ' the', 'timestamp': (10.98, 11.02), 'language': 'english'}, {'text': ' true', 'timestamp': (11.02, 11.3), 'language': 'english'}, {'text': ' meaning', 'timestamp': (11.3, 11.6), 'language': 'english'}, {'text': ' of', 'timestamp': (11.6, 11.86), 'language': 'english'}, {'text': ' its', 'timestamp': (11.86, 12.08), 'language': 'english'}, {'text': ' dream.', 'timestamp': (12.08, 12.98), 'language': 
'english'}] ``` After (`transformers==4.53.2`): ``` [{'text': ' I', 'timestamp': (1.04, 1.36), 'language': None}, {'text': ' have', 'timestamp': (1.36, 1.68), 'language': None}, {'text': ' a', 'timestamp': (1.68, 1.94), 'language': None}, {'text': ' dream.', 'timestamp': (1.94, 3.82), 'language': None}, {'text': ' But', 'timestamp': (3.82, 3.96), 'language': None}, {'text': ' one', 'timestamp': (3.96, 4.18), 'language': None}, {'text': ' day,', 'timestamp': (4.18, 6.22), 'language': None}, {'text': ' this', 'timestamp': (6.22, 6.58), 'language': None}, {'text': ' nation', 'timestamp': (6.58, 7.22), 'language': None}, {'text': ' will', 'timestamp': (7.22, 7.82), 'language': None}, {'text': ' rise', 'timestamp': (7.82, 8.3), 'language': None}, {'text': ' up,', 'timestamp': (8.3, 10.2), 'language': None}, {'text': ' live', 'timestamp': (10.2, 10.56), 'language': None}, {'text': ' out', 'timestamp': (10.56, 10.98), 'language': None}, {'text': ' the', 'timestamp': (10.98, 11.02), 'language': None}, {'text': ' true', 'timestamp': (11.02, 11.3), 'language': None}, {'text': ' meaning', 'timestamp': (11.3, 11.6), 'language': None}, {'text': ' of', 'timestamp': (11.6, 11.86), 'language': None}, {'text': ' its', 'timestamp': (11.86, 12.08), 'language': None}, {'text': ' dream.', 'timestamp': (12.08, None), 'language': None}] ``` ### Expected behavior The old behaviour was correct. Maybe related: #21311, #21427, #25138, #27604, #29520, #31572
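Until the regression is fixed, a workaround sketch — my own helper, with the language and audio duration supplied by the caller (e.g. from a separate detection pass), not recovered from the pipeline — can backfill the two broken fields:

```python
def backfill_chunks(chunks, language, audio_end=None):
    """Fill 'language': None entries and a missing end timestamp on the
    final chunk of a whisper pipeline result."""
    fixed = []
    for i, chunk in enumerate(chunks):
        start, end = chunk["timestamp"]
        if end is None and i == len(chunks) - 1:
            end = audio_end  # close the last word at the clip's end time
        fixed.append({**chunk, "timestamp": (start, end),
                      "language": chunk.get("language") or language})
    return fixed

broken = [{"text": " its", "timestamp": (11.86, 12.08), "language": None},
          {"text": " dream.", "timestamp": (12.08, None), "language": None}]
print(backfill_chunks(broken, "english", audio_end=12.98))
```

This only papers over the output; the underlying pipeline behavior still needs the fix upstream.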
open
reopened
false
12
[ "bug", "Audio" ]
[ "eustlb" ]
2025-07-14T19:36:46Z
2026-03-24T13:00:45Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Metric-Void
21,335,640
MDQ6VXNlcjIxMzM1NjQw
User
false
huggingface/transformers
3,265,628,633
I_kwDOCUB6oc7CpYnZ
39,692
https://github.com/huggingface/transformers/issues/39692
https://api.github.com/repos/huggingface/transformers/issues/39692
SigLIP2 documentation example has multiple errors (model/processor mismatch + quantization failure)
### System Info - `transformers` version: 4.54.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.6 - Huggingface_hub version: 0.34.1 - Safetensors version: 0.5.3 - Accelerate version: 1.9.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.7.1+cu128 (CUDA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: no - Using GPU in script?: yes - GPU type: NVIDIA GeForce RTX 3090 ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The SigLIP2 documentation example has two issues: 1. The example loads mismatched model and processor versions 2. The 4-bit quantization fails with a dtype error Running the example from https://huggingface.co/docs/transformers/en/model_doc/siglip2: ```python import torch import requests from PIL import Image from transformers import AutoProcessor, AutoModel, BitsAndBytesConfig bnb_config = BitsAndBytesConfig(load_in_4bit=True) model = AutoModel.from_pretrained("google/siglip2-large-patch16-512", quantization_config=bnb_config, device_map="auto", attn_implementation="sdpa") processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224") url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" image = Image.open(requests.get(url, stream=True).raw) candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"] # follows the pipeline prompt template to get same results texts = [f'This is a photo of {label}.' 
for label in candidate_labels] # IMPORTANT: we pass `padding=max_length` and `max_length=64` since the model was trained with this inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt").to("cuda") with torch.no_grad(): outputs = model(**inputs) logits_per_image = outputs.logits_per_image probs = torch.sigmoid(logits_per_image) print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'") ``` ## Issues 1. **Model/Processor Mismatch**: The example loads `siglip2-large-patch16-512` model but `siglip2-base-patch16-224` processor 2. **Quantization Error**: When run (even after fixing the mismatch), the code fails with: ``` RuntimeError: self and mat2 must have the same dtype, but got Half and Byte ``` ### Expected behavior 1. The example should use matching model and processor 2. The quantization should work as shown, or the documentation should note that quantization is not supported for SigLIP2 models
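The first issue is easy to trip over because the two repo ids differ only in their suffix. A tiny guard sketch — my own helper, assuming the `siglip2-<size>-<patch>-<resolution>` naming convention holds — makes the mismatch explicit before any weights are downloaded:

```python
def siglip2_variant(repo_id):
    """'google/siglip2-large-patch16-512' -> ('large', 'patch16', '512')."""
    return tuple(repo_id.split("/")[-1].split("-")[1:])

def checkpoints_match(model_id, processor_id):
    """True only when model and processor come from the same variant."""
    return siglip2_variant(model_id) == siglip2_variant(processor_id)

print(checkpoints_match("google/siglip2-large-patch16-512",
                        "google/siglip2-base-patch16-224"))  # False: the doc example mixes variants
```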
closed
completed
false
5
[ "bug" ]
[]
2025-07-26T13:25:19Z
2026-02-03T13:37:21Z
2026-02-03T13:37:21Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
david-littlefield
30,560,737
MDQ6VXNlcjMwNTYwNzM3
User
false
huggingface/transformers
3,307,563,945
I_kwDOCUB6oc7FJWup
40,070
https://github.com/huggingface/transformers/issues/40070
https://api.github.com/repos/huggingface/transformers/issues/40070
Transformer GGUF support philosophy / naive question
Hey there, I am a huge user of both transformers and diffusers and really love the work of the teams at HF. However, something is not entirely clear to me regarding transformers' GGUF support. GGUF's main idea is to be a format that allows running big models on machines with limited capabilities. With this in mind, in diffusers (or in comfyui and llama.cpp) I can load gguf files natively and they mostly just work, using less vram than the original model. In transformers, however, while many models are supported in gguf format, they are loaded as gguf but then immediately dequantized back to fp16/32, which takes a lot of time and ultimately has the same vram requirements as the full model. So I don't understand why someone would ever want to do it (at that point you are probably better off loading the full model and not waiting for dequantization). It feels like I am missing something very obvious here, so I wanted to ask the community for guidance 🙏 It's probably a stupid question but thanks a lot for taking the time to answer me :)
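To make the memory point concrete, here is a back-of-envelope sketch — the ~4.5 bits/weight figure is an approximation for a Q4_K_M file, not an exact number — of why dequantizing on load erases the GGUF saving:

```python
def model_gib(n_params, bits_per_weight):
    """Approximate weight storage in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

n = 7e9                        # a 7B-parameter model
on_disk = model_gib(n, 4.5)    # quantized GGUF (~Q4_K_M)
in_memory = model_gib(n, 16)   # after transformers dequantizes to fp16
print(f"{on_disk:.1f} GiB on disk -> {in_memory:.1f} GiB once dequantized")
```

So after dequantization the resident footprint is roughly 3.5x the file size, i.e. the same as loading the full-precision model.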
open
reopened
false
6
[ "Feature request", "GGUF" ]
[]
2025-08-10T13:14:42Z
2026-02-26T05:45:46Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
luke14free
166,602
MDQ6VXNlcjE2NjYwMg==
User
false
huggingface/transformers
3,353,286,554
I_kwDOCUB6oc7H3xea
40,444
https://github.com/huggingface/transformers/issues/40444
https://api.github.com/repos/huggingface/transformers/issues/40444
Finetuning Qwen2.5-VL with an IterableDataset with multiple images per prompt fails
### System Info - `transformers` version: 4.55.3 - Platform: Linux-5.4.0-1113-oracle-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.34.4 - Safetensors version: 0.5.3 - Accelerate version: 1.10.0 - Accelerate config: not found - DeepSpeed version: 0.16.9 - PyTorch version (accelerator?): 2.6.0+cu124 (CUDA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: Yes, for training - GPU type: NVIDIA RTX A5000 ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here's a script that reproduces this issue: ``` from datasets import load_dataset from transformers import AutoModelForVision2Seq, AutoProcessor from peft import get_peft_model, LoraConfig dataset = load_dataset("unsloth/LaTeX_OCR", split = "train[:20]", streaming=False) model = AutoModelForVision2Seq.from_pretrained( "Qwen/Qwen2.5-VL-3B-Instruct" ) processor = AutoProcessor.from_pretrained( "Qwen/Qwen2.5-VL-3B-Instruct", ) peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0, bias="none", target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "qkv_proj", "up_proj", "down_proj"], ) model = get_peft_model( model=model, peft_config=peft_config, ) instruction = "Write the LaTeX representation for this image." 
def convert_to_conversation(sample): conversation = [ { "role": "user", "content" : [ {"type" : "text", "text" : instruction}, {"type" : "image", "image" : sample["image"]}, {"type" : "image", "image" : sample["image"].resize((sample["image"].size[0]//2, sample["image"].size[1]//2))} ] }, { "role" : "assistant", "content" : [ {"type" : "text", "text" : sample["text"]} ] }, ] return { "messages" : conversation} def generator(): for sample in dataset: yield convert_to_conversation(sample) from datasets import IterableDataset converted_dataset = IterableDataset.from_generator(generator) from trl import SFTTrainer, SFTConfig from qwen_vl_utils import process_vision_info class DataCollator: def __init__(self, processor): self.processor = processor def __call__(self, examples): return self.process(examples) def process(self, examples): texts = [ processor.apply_chat_template(example["messages"], tokenize=False) for example in examples ] image_inputs = [ process_vision_info(example["messages"])[0] for example in examples ] model_inputs = processor( text=texts, images=image_inputs, return_tensors="pt", padding=True ) labels = model_inputs["input_ids"].clone() # mask padding tokens in labels labels[labels == processor.tokenizer.pad_token_id] = -100 image_tokens = [151652, 151653, 151655] # mask image token IDs in the labels for image_token_id in image_tokens: labels[labels == image_token_id] = -100 input_ids = model_inputs["input_ids"] attention_mask = model_inputs["attention_mask"] pixel_values = model_inputs["pixel_values"] image_grid_thw = model_inputs["image_grid_thw"] pixel_values = pixel_values.unsqueeze(0) return { "input_ids": input_ids, "attention_mask": attention_mask, "pixel_values": pixel_values, "image_grid_thw": image_grid_thw, "labels": labels } trainer = SFTTrainer( model = model, train_dataset = converted_dataset, data_collator = DataCollator(processor), args = SFTConfig( per_device_train_batch_size = 1, gradient_accumulation_steps = 8, warmup_steps = 5, 
max_steps = 30, # num_train_epochs = 1, # Set this instead of max_steps for full training runs learning_rate = 2e-4, logging_steps = 1, optim = "adamw_8bit", weight_decay = 0.01, lr_scheduler_type = "cosine_with_restarts", seed = 3407, # seed = 42, output_dir = "outputs", report_to = "none", # For Weights and Biases # You MUST put the below items for vision finetuning: remove_unused_columns = False, dataset_text_field = "", dataset_kwargs = {"skip_prepare_dataset": True}, # max_seq_length = 2048, ), ) trainer_stats = trainer.train() ``` ### Expected behavior I would expect that training works as it does with only one image per example. I wasn't sure whether to post this to the accelerate issues page or here given that this is occurring generally in training. I think I've traced this issue to the DataLoaderDispatcher whenever `self.slice_fn` is called (https://github.com/huggingface/accelerate/blob/5dd3d0b6901983bcb6de1b69687333a324382524/src/accelerate/data_loader.py#L880 and https://github.com/huggingface/accelerate/blob/5dd3d0b6901983bcb6de1b69687333a324382524/src/accelerate/data_loader.py#L911). For Qwen2.5-VL, in particular, it slices the `image_grid_thw` seemingly with the assumption that there is only one image per row (it slices by batch size). If this is the incorrect place to post this, I can move this issue to the accelerate repo.
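The suspected root cause above — slicing `image_grid_thw` by batch index as if every example had exactly one image — can be sketched and corrected with cumulative image counts (my own helper for illustration; a real fix would live in accelerate's `slice_fn`):

```python
def slice_image_grid_thw(grid_rows, images_per_example, start, end):
    """Return the grid rows belonging to examples [start, end), where
    images_per_example[i] is how many images example i contributed.
    Naive slicing grid_rows[start:end] is only correct when every
    example has exactly one image."""
    offsets = [0]
    for n in images_per_example:
        offsets.append(offsets[-1] + n)  # cumulative row offsets per example
    return grid_rows[offsets[start]:offsets[end]]

# 3 examples with 2, 1 and 3 images -> 6 grid rows in total
rows = list(range(6))
print(slice_image_grid_thw(rows, [2, 1, 3], 1, 3))  # [2, 3, 4, 5]
```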
closed
completed
false
14
[ "bug" ]
[]
2025-08-25T21:38:56Z
2026-02-22T10:05:33Z
2025-10-04T08:02:15Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Infernaught
72,055,086
MDQ6VXNlcjcyMDU1MDg2
User
false
huggingface/transformers
3,406,911,750
I_kwDOCUB6oc7LEVkG
40,822
https://github.com/huggingface/transformers/issues/40822
https://api.github.com/repos/huggingface/transformers/issues/40822
Welcome v5
In this issue we share our plan for the upcoming version 5 of transformers. We've talked about version 5 for years and it's finally around the corner! We'll release a blog post announcing the focus of this release shortly, and wanted to share what we believe the process will look like over the coming weeks. - Soon, a new branch named `v4` will be created on the repository. It is from this branch that all v4-related updates will take place. Going forward, `main` will act as the version 5 branch. - For the next few weeks, every PR except breaking changes or significant refactors will be merged in both `main` and `v4`. - In a few weeks, we release what will likely be one of the last minor v4 releases (`v4.57.0`) - A few weeks later, we will release `v5`. We will aim to limit, as much as possible, the breaking changes within that release; but expect a migration guide as well as some specific breaking changes enabling much more versatile, performant, and cleaner code going forward. - Over the next few months, we'll continue patching the `v4` branch and will release patch updates. The v5 on pypi will be preceded by RC releases that we will share in this issue. Please subscribe to this issue to be updated, and let us know if you have thoughts about the outlined process above.
closed
completed
false
33
[ "for_v5?" ]
[ "LysandreJik", "ArthurZucker", "Cyrilvallez" ]
2025-09-11T14:49:29Z
2026-03-05T08:08:25Z
2026-03-05T08:08:25Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
LysandreJik
30,755,778
MDQ6VXNlcjMwNzU1Nzc4
User
false
huggingface/transformers
3,432,292,570
I_kwDOCUB6oc7MlKDa
40,990
https://github.com/huggingface/transformers/issues/40990
https://api.github.com/repos/huggingface/transformers/issues/40990
Extremely high perplexity on openai/gpt-oss-20b with WikiText-2 (raw)
### System Info - `transformers` version: 4.56.1 - Platform: Linux-6.5.0-1025-gcp-x86_64-with-glibc2.35 - Python version: 3.11.10 - Huggingface_hub version: 0.35.0 - Safetensors version: 0.6.2 - Accelerate version: 1.10.1 - Accelerate config: not found - DeepSpeed version: 0.17.3+cu126.pt27.v0.17.3.recogni2 - PyTorch version (accelerator?): 2.7.1+cu126 (CUDA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: no - Using GPU in script?: yes - GPU type: NVIDIA A100-SXM4-40GB ### Who can help? @ArthurZucker @Cyrilvallez ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Script: ```python #!/usr/bin/env python import math import torch from datasets import load_dataset from transformers import AutoModelForCausalLM, AutoTokenizer # Config MODEL_NAME = "openai/gpt-oss-20b" SPLIT = "test" # WikiText-2 (raw) test split CONTEXT_LENGTH = 2048 # evaluation window size DTYPE = torch.bfloat16 DEVICE_MAP = "auto" def main(): # Load tokenizer & model tok = AutoTokenizer.from_pretrained(MODEL_NAME) model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=DTYPE, device_map=DEVICE_MAP).eval() # Load dataset and build one long token stream (no special tokens) ds = load_dataset("wikitext", "wikitext-2-raw-v1", split=SPLIT) encs = tok([row["text"] for row in ds], add_special_tokens=False) flat_ids = [tid for seq in encs["input_ids"] for tid in seq] ids = torch.tensor(flat_ids, dtype=torch.long) # Keep first 10% of tokens n_keep = max(1, int(0.10 * ids.numel())) ids = ids[:n_keep] # Keep only full CONTEXT_LENGTH windows n_windows = ids.numel() // CONTEXT_LENGTH if n_windows == 0: raise ValueError(f"Not enough tokens 
({ids.numel()}) for a single {CONTEXT_LENGTH}-token window.") ids = ids[: n_windows * CONTEXT_LENGTH].view(n_windows, CONTEXT_LENGTH) # Forward passes total_nll, total_tokens = 0.0, 0 with torch.no_grad(): for i in range(n_windows): x = ids[i : i + 1].to(model.device) # [1, L] out = model(input_ids=x, labels=x) # HF shifts labels internally contrib = x.size(1) - 1 # L-1 positions contribute total_nll += out.loss.item() * contrib # sum NLL total_tokens += contrib avg_nll = total_nll / total_tokens ppl = math.exp(avg_nll) # Detailed prints print("\n=== Repro Config ===") print(f"model_name: {MODEL_NAME}") print(f"split: {SPLIT}") print(f"context_length: {CONTEXT_LENGTH}") print(f"dtype: {DTYPE}") print(f"device_map: {DEVICE_MAP}") print(f"tokens_total: {ids.numel()}") print(f"num_segments: {n_windows}") print(f"bos/eos/pad: {tok.bos_token}/{tok.eos_token}/{tok.pad_token}") print("\n=== Results ===") print(f"tokens_scored: {total_tokens}") print(f"avg_nll: {avg_nll:.6f}") print(f"perplexity: {ppl:.3f}\n") if __name__ == "__main__": main() ``` Output: ``` === Repro Config === model_name: openai/gpt-oss-20b split: test context_length: 2048 dtype: torch.bfloat16 device_map: auto tokens_total: 28672 num_segments: 14 bos/eos/pad: <|startoftext|>/<|return|>/<|endoftext|> === Results === tokens_scored: 28658 avg_nll: 5.977535 perplexity: 394.467 ``` ### Expected behavior When evaluating `openai/gpt-oss-20b` on the WikiText-2 (raw) test split with a standard perplexity script, the reported perplexity is extremely high (~394). This is surprising, as a 20B parameter GPT-class model should normally achieve much lower perplexity on this benchmark. Clarification would be helpful to determine whether this behavior indicates a bug in the Transformers integration or if GPT-OSS models are not intended to be directly evaluated as causal LMs without special formatting. 
Note: The model card mentions a “harmony” chat template for usage, but it is unclear whether special formatting is required when performing perplexity evaluation on a corpus like WikiText.
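Independently of the harmony-formatting question, the script's aggregation reduces to a small function; isolating it makes it easy to confirm that the ~394 figure is pure exp(mean NLL) and not a bookkeeping artifact:

```python
import math

def perplexity(window_mean_nlls, window_len):
    """Corpus perplexity from per-window mean NLLs; HF's shifted-label loss
    scores window_len - 1 tokens per window."""
    contrib = window_len - 1
    total_nll = sum(nll * contrib for nll in window_mean_nlls)
    return math.exp(total_nll / (contrib * len(window_mean_nlls)))

# sanity check: uniform per-token NLL of ln(4) must give perplexity 4
print(perplexity([math.log(4.0)] * 14, 2048))
# the reported run: 14 windows averaging 5.977535 nats -> ~394.5
print(perplexity([5.977535] * 14, 2048))
```

Since the arithmetic checks out, the high number points at the evaluation setup (raw text vs. the model's chat-formatted training distribution) rather than the math.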
closed
completed
false
6
[ "bug" ]
[]
2025-09-19T00:40:14Z
2026-03-01T11:07:19Z
2025-09-22T10:07:32Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
kuantuna
66,808,459
MDQ6VXNlcjY2ODA4NDU5
User
false
huggingface/transformers
3,443,922,628
I_kwDOCUB6oc7NRhbE
41,084
https://github.com/huggingface/transformers/issues/41084
https://api.github.com/repos/huggingface/transformers/issues/41084
Set Block Decoding
### Feature request Adding Set Block Decoding for Training and inference. https://huggingface.co/papers/2509.04185 ### Motivation Speeding up generation time with minimal additional fine-tuning. ### Your contribution Could implement a first draft.
open
null
false
7
[ "Feature request" ]
[]
2025-09-23T06:42:35Z
2026-02-16T14:07:24Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
davidmrau
20,661,461
MDQ6VXNlcjIwNjYxNDYx
User
false
huggingface/transformers
3,444,381,708
I_kwDOCUB6oc7NTRgM
41,093
https://github.com/huggingface/transformers/issues/41093
https://api.github.com/repos/huggingface/transformers/issues/41093
IndexError: The shape of the mask [1406] at index 0 does not match the shape of the indexed tensor [1405] at index 0
### System Info transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", in get_rope_index: [rank3]: input_ids = input_ids[attention_mask[i] == 1] IndexError: The shape of the mask [1406] at index 0 does not match the shape of the indexed tensor [1405] at index 0 transformers==4.49.0 transformers==4.51.2 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Parameter Offload: Total persistent parameters: 848896 in 368 params --- DEBUGGING prompt_inputs --- Key: input_ids, Shape: torch.Size([1, 1411]) Key: attention_mask, Shape: torch.Size([1, 1411]) Key: pixel_values, Shape: torch.Size([5476, 1176]) Key: image_grid_thw, Shape: torch.Size([1, 3]) 0%| | 0/4 [00:00<?, ?it/s]--- DEBUGGING prompt_inputs --- Key: input_ids, Shape: torch.Size([1, 1402]) Key: attention_mask, Shape: torch.Size([1, 1402]) Key: pixel_values, Shape: torch.Size([5476, 1176]) Key: image_grid_thw, Shape: torch.Size([1, 3]) `generation_config` default values have been modified to match model-specific defaults: {'use_cache': False, 'temperature': 1e-06, 'repetition_penalty': 1.05, 'bos_token_id': 151643, 'eos_token_id': [151645, 151643]}. If this is not desired, please set these values explicitly. `generation_config` default values have been modified to match model-specific defaults: {'use_cache': False, 'temperature': 1e-06, 'repetition_penalty': 1.05, 'bos_token_id': 151643, 'eos_token_id': [151645, 151643]}. If this is not desired, please set these values explicitly. / conda-envs/searchlm_cu121/lib/python3.10/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True.
Gradients will be None warnings.warn( / conda-envs/searchlm_cu121/lib/python3.10/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None warnings.warn( [rank0]: Traceback (most recent call last): [rank0]: File "/ unified/UnifiedReward-main/UnifiedReward-Think/src/open_r1/grpo.py", line 337, in <module> [rank0]: main(script_args, training_args, model_args) [rank0]: File "/ unified/UnifiedReward-main/UnifiedReward-Think/src/open_r1/grpo.py", line 326, in main [rank0]: trainer.train() [rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/trainer.py", line 2237, in train [rank0]: return inner_training_loop( [rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/trainer.py", line 2578, in _inner_training_loop [rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch) [rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/trainer.py", line 3792, in training_step [rank0]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch) [rank0]: File "/ unified/UnifiedReward-main/UnifiedReward-Think/src/open_r1/trainer/grpo_trainer.py", line 495, in compute_loss [rank0]: prompt_completion_ids = unwrapped_model.generate(**prompt_inputs, generation_config=self.generation_config) [rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context [rank0]: return func(*args, **kwargs) [rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/generation/utils.py", line 2633, in generate [rank0]: result = self._sample( [rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/generation/utils.py", line 3607, in _sample [rank0]: model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) [rank0]: File "/ 
conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1561, in prepare_inputs_for_generation [rank0]: vision_positions, rope_deltas = self.model.get_rope_index( [rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1057, in get_rope_index [rank0]: input_ids = input_ids[attention_mask[i] == 1] [rank0]: IndexError: The shape of the mask [1406] at index 0 does not match the shape of the indexed tensor [1405] at index 0 0%| | 0/4 [00:06<?, ?it/s] [2025-09-23 05:40:36,349] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 1295796 [2025-09-23 05:40:36,350] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 1295797 [2025-09-23 05:40:36,565] [ERROR] [launch.py:325:sigkill_handler] ['/ conda-envs/searchlm_cu121/bin/python3.10', '-u', 'src/open_r1/grpo.py', '--local_rank=1', '--deepspeed', 'scripts/zero3.json', '--ddp_timeout', '180000000', '--output_dir', './checkpoints/UnifiedReward-Think-qwen-GRPO', '--model_name_or_path', '/ model/UnifiedReward-qwen-7b', '--dataset_name', '/ unified/UnifiedReward-main/UnifiedReward-Think/dataset/HPD/HPD_train_data_qwen1.json', '--max_prompt_length', '2048', '--max_completion_length', '1024', '--num_generations', '2', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '1', '--learning_rate', '1e-6', '--logging_steps', '1', '--bf16', 'True', '--torch_dtype', 'bfloat16', '--report_to', 'none', '--gradient_checkpointing', 'true', '--attn_implementation', 'eager', '--max_pixels', '147456', '--save_steps', '40', '--save_total_limit', '8', '--save_only_model', 'false', '--num_train_epochs', '2'] exits with return code = 1 ### Expected behavior It seems to be a transformers version issue. Could you help take a look?
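For anyone triaging: the error itself is PyTorch's boolean-mask shape check, which fires whenever the mask passed to `get_rope_index` is one element longer than the `input_ids` it indexes. A minimal standalone reproduction of the failing indexing pattern (independent of Qwen2.5-VL) could look like:

```python
import torch

input_ids = torch.arange(1405)                       # tensor with 1405 elements
attention_mask = torch.ones(1406, dtype=torch.bool)  # mask is one element longer

try:
    input_ids[attention_mask]  # boolean mask must match the indexed dimension
except IndexError as e:
    print(e)  # same shape-mismatch message as in the report
```

This suggests the prompt tensors picked up an extra token somewhere between tokenization and generation, so comparing the shapes logged before `generate` with the ones inside `get_rope_index` is a reasonable next step.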
closed
completed
false
14
[ "bug", "Vision" ]
[]
2025-09-23T09:11:35Z
2026-03-08T14:58:20Z
2025-10-06T08:56:31Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
wyn1015
201,194,623
U_kgDOC_38fw
User
false
huggingface/transformers
3,468,498,317
I_kwDOCUB6oc7OvRWN
41,211
https://github.com/huggingface/transformers/issues/41211
https://api.github.com/repos/huggingface/transformers/issues/41211
Add DEIMv2
### Model description It would be nice to integrate DEIMv2, a new state-of-the-art model for real-time object detection based on DINOv3. The weights are released under Apache 2.0. Related thread: https://github.com/Intellindust-AI-Lab/DEIMv2/issues/20 ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation Code: https://github.com/Intellindust-AI-Lab/DEIMv2 Weights (on Google Drive for now): https://github.com/Intellindust-AI-Lab/DEIMv2?tab=readme-ov-file#1-model-zoo Ideally, the [AutoBackbone API](https://huggingface.co/docs/transformers/main_classes/backbones) can be leveraged to avoid re-implementing the entire DINOv3 backbone in `modular_deimv2.py` and `modeling_deimv2.py`. See an example of how this is leveraged for DETR [here](https://github.com/huggingface/transformers/blob/59035fd0e1876f9e526488b61fe43ff8829059f6/src/transformers/models/detr/modeling_detr.py#L280).
open
null
false
6
[ "New model" ]
[]
2025-09-30T09:43:07Z
2026-03-01T08:52:35Z
null
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
NielsRogge
48,327,001
MDQ6VXNlcjQ4MzI3MDAx
User
false
huggingface/transformers
3,511,681,654
I_kwDOCUB6oc7RUAJ2
41,553
https://github.com/huggingface/transformers/issues/41553
https://api.github.com/repos/huggingface/transformers/issues/41553
Bad error message for AutoTokenizer loading Voxtral
### System Info Getting the following unhelpful error when trying to load Voxtral's tokenizer with `AutoTokenizer` without `mistral-common` installed. ``` ../../.conda/envs/et_new/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:1144: in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) ../../.conda/envs/et_new/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2070: in from_pretrained return cls._from_pretrained( ../../.conda/envs/et_new/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2108: in _from_pretrained slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained( ../../.conda/envs/et_new/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2316: in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) ../../.conda/envs/et_new/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py:171: in __init__ self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False)) ../../.conda/envs/et_new/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py:198: in get_spm_processor tokenizer.Load(self.vocab_file) ../../.conda/envs/et_new/lib/python3.10/site-packages/sentencepiece/__init__.py:961: in Load return self.LoadFromFile(model_file) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <sentencepiece.SentencePieceProcessor; proxy of <Swig Object of type 'sentencepiece::SentencePieceProcessor *' at 0x7f5e7e25f780> >, arg = None def LoadFromFile(self, arg): > return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) E TypeError: not a string ../../.conda/envs/et_new/lib/python3.10/site-packages/sentencepiece/__init__.py:316: TypeError ``` ### Who can help? 
@ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. `pip install transformers` 2. `from transformers import AutoTokenizer; AutoTokenizer.from_pretrained("mistralai/Voxtral-Mini-3B-2507")` ### Expected behavior A clearer error message suggesting to `pip install mistral-common`
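A sketch of the kind of guard the report is asking for; `require_package` is a hypothetical helper to illustrate the idea, not the actual fix that landed in transformers:

```python
import importlib.util


def require_package(module_name: str, install_hint: str) -> None:
    """Raise an actionable ImportError when an optional dependency is missing."""
    if importlib.util.find_spec(module_name) is None:
        raise ImportError(
            f"Loading this tokenizer requires the `{module_name}` package. "
            f"Install it with `{install_hint}`."
        )


# For Voxtral this could be called before constructing the tokenizer, e.g.:
# require_package("mistral_common", "pip install mistral-common")
```

Failing fast with a message like this, before `sentencepiece` ever sees a `None` vocab file, would turn the opaque `TypeError: not a string` into an immediately actionable error.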
closed
completed
false
21
[ "Good First Issue", "bug" ]
[]
2025-10-13T22:37:26Z
2026-02-13T17:30:05Z
2025-11-24T12:16:54Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
jackzhxng
32,371,937
MDQ6VXNlcjMyMzcxOTM3
User
false
huggingface/transformers
3,518,780,612
I_kwDOCUB6oc7RvFTE
41,628
https://github.com/huggingface/transformers/issues/41628
https://api.github.com/repos/huggingface/transformers/issues/41628
Cannot import name 'AutoImageProcessor' from 'transformers'
### System Info Intel CPU Nvidia 3090 ubuntu 22.04 python 3.10.12 transformers=5.0.0.dev0 (installed from the official git repo) ### PS: It was also tested with transformers=4.57.1, installed via `pip install`; the same error persisted while executing `from transformers import AutoImageProcessor, AutoModel`. ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. in any python script: from transformers import AutoImageProcessor, AutoModel ### Expected behavior It shouldn't raise the error: Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'AutoImageProcessor' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)
closed
completed
false
6
[ "bug" ]
[]
2025-10-15T16:29:20Z
2026-02-26T18:36:13Z
2025-10-16T12:37:07Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Pittmann-XIE
103,981,664
U_kgDOBjKiYA
User
false
huggingface/transformers
3,528,715,552
I_kwDOCUB6oc7SU-0g
41,720
https://github.com/huggingface/transformers/issues/41720
https://api.github.com/repos/huggingface/transformers/issues/41720
Qwen3 with auto device mapping fails due to cudaErrorAssert on A800
### System Info

- `transformers` version: 4.57.1
- Platform: Linux-4.19.90-2107.6.0.0192.8.oe1.bclinux.x86_64-x86_64-with-glibc2.35
- Python version: 3.12.12
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.9.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA A800 80GB PCIe

### Who can help?

@gante @ArthurZucker @Cyrilvallez

### Information

- [x] The official example scripts
- [ ] My own modified scripts

### Tasks

- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

I use the following python script (a very simple one).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "/root/autodl-fs/checkpoints/Qwen/Qwen3-32B",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "/root/autodl-fs/checkpoints/Qwen/Qwen3-32B",
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
    attn_implementation="eager"  # flash_attention_2
)

prompt = "Output a quick sort algorithm"
messages = [
    {"role": "user", "content": prompt},
]
template_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=False)
inputs = tokenizer(template_prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True
    )

generated_ids = outputs[:, inputs.input_ids.shape[1]:]
generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(generated_text)
```

But this fails due to an assertion:

```
/pytorch/aten/src/ATen/native/cuda/TensorCompare.cu:112: _assert_async_cuda_kernel: block: [0,0,0], thread: [0,0,0] Assertion `probability tensor contains either `inf`, `nan` or element < 0` failed.
Traceback (most recent call last):
  File "/root/ACBR/test_transformer.py", line 24, in <module>
    outputs = model.generate(
              ^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/your_env_name/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/your_env_name/lib/python3.12/site-packages/transformers/generation/utils.py", line 2564, in generate
    result = decoding_method(
             ^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/your_env_name/lib/python3.12/site-packages/transformers/generation/utils.py", line 2829, in _sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.AcceleratorError: CUDA error: device-side assert triggered
Search for `cudaErrorAssert' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```

When I load the model with two cards (`CUDA_VISIBLE_DEVICES=0,1 python test_transformer.py`), the error occurs. But when I load the model with only a single card (`CUDA_VISIBLE_DEVICES=0 python test_transformer.py`), the behavior is normal.

### Expected behavior

Just output the text.
closed
completed
false
7
[ "bug" ]
[]
2025-10-18T11:50:43Z
2026-03-12T05:43:23Z
2026-01-05T08:03:26Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
guosyjlu
69,756,483
MDQ6VXNlcjY5NzU2NDgz
User
false
huggingface/transformers
3,532,707,392
I_kwDOCUB6oc7SkNZA
41,749
https://github.com/huggingface/transformers/issues/41749
https://api.github.com/repos/huggingface/transformers/issues/41749
`_get_num_multimodal_tokens` is not implemented for model `mllama`
vLLM 0.11’s Transformers backend expects the HF processor to implement a method called `_get_num_multimodal_tokens`, which is [not implemented for mllama](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mllama/processing_mllama.py) in `transformers 4.57.1`. Because of this, `vllm serve meta-llama/Llama-3.2-11B-Vision` fails on `vllm 0.11.0`. It works on `vllm 0.10.2`. The error is `'MllamaProcessor' object has no attribute '_get_num_multimodal_tokens'`. ## Related https://github.com/vllm-project/vllm/issues/27198 ### Who can help? Tagging @yonigozlan @molbap @zucchini-nlp for input; happy to implement the method if no one’s on it yet, and I’d appreciate your guidance. ### Reproduction ``` from transformers import AutoProcessor proc = AutoProcessor.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct") print(hasattr(proc, "_get_num_multimodal_tokens")) # should be True, but it is not ``` ### Expected behavior Implement `_get_num_multimodal_tokens` as it is implemented for other models in `./src/transformers/models/` (like `gemma3`). ## Useful links * https://huggingface.co/docs/transformers/main/en/transformers_as_backend#multimodal-models * https://blog.vllm.ai/2025/04/11/transformers-backend.html
closed
completed
false
4
[ "bug" ]
[]
2025-10-20T14:38:22Z
2026-01-26T10:05:20Z
2025-10-21T09:58:49Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
mrtpk
8,076,245
MDQ6VXNlcjgwNzYyNDU=
User
false
huggingface/transformers
3,535,832,788
I_kwDOCUB6oc7SwIbU
41,762
https://github.com/huggingface/transformers/issues/41762
https://api.github.com/repos/huggingface/transformers/issues/41762
`IndexError: index 0 is out of bounds for dimension 0 with size 0` when loading Gemma3ForConditionalGeneration with DeepSpeed ZeRO-3
### System Info transformers=4.57.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction model = Gemma3ForConditionalGeneration.from_pretrained(model_args.model_local_path, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", # device_map='cuda:3', ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, ) ### Expected behavior When I try to **pre-train / fine-tune** `Gemma3ForConditionalGeneration` with **DeepSpeed ZeRO-3**, the job crashes **immediately after the model is initialized** with the following traceback: [2025-10-21 09:31:12,879] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect) [2025-10-21 09:31:14,455] [INFO] [config.py:744:__init__] Config mesh_device None world_size = 1 [2025-10-21 09:31:14,456] [INFO] [comm.py:675:init_distributed] cdb=None [2025-10-21 09:31:14,456] [INFO] [comm.py:690:init_distributed] Not using the DeepSpeed or dist launchers, attempting to detect MPI environment...
[2025-10-21 09:31:15,197] [INFO] [comm.py:745:mpi_discovery] Discovered MPI settings of world_rank=0, local_rank=0, world_size=1, master_addr=10.169.115.149, master_port=29500 [2025-10-21 09:31:15,197] [INFO] [comm.py:706:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl [2025-10-21 09:31:16,938] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 884, num_elems = 4.97B [rank0]: Traceback (most recent call last): [rank0]: File "/data/scy/SCY/SonoVLM_V2/deepspeed_train.py", line 519, in <module> [rank0]: train() [rank0]: File "/data/scy/SCY/SonoVLM_V2/deepspeed_train.py", line 355, in train [rank0]: model = Gemma3ForConditionalGeneration.from_pretrained(model_args.model_local_path, [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 277, in _wrapper [rank0]: return func(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 5048, in from_pretrained [rank0]: ) = cls._load_pretrained_model( [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 5362, in _load_pretrained_model [rank0]: model._initialize_missing_keys(missing_keys + mismatched_keys, is_quantized) [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 5892, in _initialize_missing_keys [rank0]: self.initialize_weights() [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context [rank0]: return func(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File 
"/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2984, in initialize_weights [rank0]: self.smart_apply(self._initialize_weights) [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2975, in smart_apply [rank0]: module.smart_apply(module._initialize_weights) [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2975, in smart_apply [rank0]: module.smart_apply(module._initialize_weights) [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2977, in smart_apply [rank0]: module.smart_apply(fn) [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2978, in smart_apply [rank0]: fn(self) [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2952, in _initialize_weights [rank0]: self._init_weights(module) [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 434, in _init_weights [rank0]: super()._init_weights(module) [rank0]: File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2929, in _init_weights [rank0]: module.weight.data[module.padding_idx].zero_() [rank0]: ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^ [rank0]: IndexError: index 0 is out of bounds for dimension 0 with size 0 [rank0]:[W1021 09:31:18.622273376 ProcessGroupNCCL.cpp:1479] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
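For context, the crash comes from `_init_weights` indexing `padding_idx` into an embedding weight that ZeRO-3 has partitioned down to zero elements on this rank. A hedged sketch of the kind of guard that avoids it (names are illustrative; the actual fix in transformers may instead gather the partitioned parameter before initializing):

```python
import torch
import torch.nn as nn


def zero_padding_row(module: nn.Module) -> None:
    """Zero the padding embedding row, skipping empty (ZeRO-3 partitioned) weights."""
    if (
        isinstance(module, nn.Embedding)
        and module.padding_idx is not None
        and module.weight.numel() > 0  # empty on ranks that hold no shard
    ):
        with torch.no_grad():
            module.weight[module.padding_idx].zero_()


emb = nn.Embedding(10, 4, padding_idx=0)
zero_padding_row(emb)
print(emb.weight[0].abs().sum().item())  # 0.0

empty = nn.Embedding(10, 4, padding_idx=0)
empty.weight = nn.Parameter(torch.empty(0))  # mimic a ZeRO-3 partitioned weight
zero_padding_row(empty)  # no IndexError
```

With the unguarded `module.weight.data[module.padding_idx].zero_()`, the second call above reproduces exactly the `IndexError: index 0 is out of bounds for dimension 0 with size 0` in the traceback.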
closed
completed
false
8
[ "bug" ]
[]
2025-10-21T09:58:58Z
2026-02-20T15:36:18Z
2025-10-22T15:10:46Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Asunatan
105,210,894
U_kgDOBkVkDg
User
false
huggingface/transformers
3,548,058,215
I_kwDOCUB6oc7TexJn
41,842
https://github.com/huggingface/transformers/issues/41842
https://api.github.com/repos/huggingface/transformers/issues/41842
Incorrect usage of `num_items_in_batch`?
It seems that `num_items_in_batch` is computed over all items in the batch [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2430). However, in `training_step` the loss is computed for each input in the batch one by one. Does it make sense to pass `num_items_in_batch` (for the whole batch), or should that number be for that particular input only? Right now, the entire batch's `num_items_in_batch` is used [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2486).
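For what it's worth, dividing each per-input summed loss by the whole batch's `num_items_in_batch` and then summing the contributions yields exactly the token-weighted mean over the full batch, which is presumably why the global count is passed. A toy check with hypothetical token counts:

```python
# Hypothetical summed (not averaged) cross-entropy per micro-batch,
# and the number of label tokens in each micro-batch.
sum_losses = [12.0, 30.0, 6.0]
tokens = [4, 10, 2]

num_items_in_batch = sum(tokens)  # computed once for the whole batch: 16

# Each micro-batch divides by the *global* count; contributions then sum.
accumulated = sum(loss / num_items_in_batch for loss in sum_losses)

# Identical to the exact token-weighted mean over the full batch.
true_mean = sum(sum_losses) / num_items_in_batch
print(accumulated, true_mean)  # 3.0 3.0
```

Dividing each micro-batch by its own token count instead would give an unweighted mean of per-input means, which over-weights short inputs.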
closed
completed
false
3
[]
[]
2025-10-24T07:36:00Z
2026-03-09T14:02:44Z
2025-12-01T08:02:48Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
gohar94
6,470,801
MDQ6VXNlcjY0NzA4MDE=
User
false
huggingface/transformers
3,570,611,821
I_kwDOCUB6oc7U0zZt
41,950
https://github.com/huggingface/transformers/issues/41950
https://api.github.com/repos/huggingface/transformers/issues/41950
video-classification pipeline looks for image processors
### System Info 4.57.1 ### Who can help? @zucchini-nlp I can take a stab at this sometime ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import pipeline, infer_device import torch device = infer_device() checkpoint = "facebook/vjepa2-vitl-fpc64-256" pipe = pipeline("video-classification", model=checkpoint, device=device) ``` ### Expected behavior Full trace: ``` --------------------------------------------------------------------------- IndexError Traceback (most recent call last) /usr/local/lib/python3.12/dist-packages/transformers/image_processing_base.py in get_image_processor_dict(cls, pretrained_model_name_or_path, **kwargs) 353 ] --> 354 resolved_image_processor_file = resolved_image_processor_files[0] 355 except OSError: IndexError: list index out of range During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) 5 frames /usr/local/lib/python3.12/dist-packages/transformers/image_processing_base.py in get_image_processor_dict(cls, pretrained_model_name_or_path, **kwargs) 359 except Exception: 360 # For any other exception, we throw a generic error. --> 361 raise OSError( 362 f"Can't load image processor for '{pretrained_model_name_or_path}'. If you were trying to load" 363 " it from 'https://huggingface.co/models', make sure you don't have a local directory with the" OSError: Can't load image processor for 'facebook/vjepa2-vitl-fpc64-256'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'facebook/vjepa2-vitl-fpc64-256' is the correct path to a directory containing a preprocessor_config.json file ```
open
null
false
6
[ "WIP", "bug" ]
[]
2025-10-30T12:45:06Z
2026-02-19T10:56:02Z
null
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
merveenoyan
53,175,384
MDQ6VXNlcjUzMTc1Mzg0
User
false
huggingface/transformers
3,590,608,152
I_kwDOCUB6oc7WBFUY
42,032
https://github.com/huggingface/transformers/issues/42032
https://api.github.com/repos/huggingface/transformers/issues/42032
ValueError: Unrecognized configuration class <class 'transformers.models.qwen3_omni_moe.configuration_qwen3_omni_moe.Qwen3OmniMoeConfig'> for this kind of AutoModel: AutoModel.
### System Info I started testing the Qwen3-Omni model when transformers version 4.56.0 was available, which had issues with this model. The commits and bug fixes for transformers version 4.57.0 resolved them, but that commit was only available on git. Since the transformers update on pip and also on git, there are again issues running the model. The issues are mentioned below. I have tried all the versions, and all fail with the same error while loading. _Since there is an update in the transformers version, the following error occurs when we load the model:_ `Starting to load model ../Qwen3-Omni-30B-A3B-Thinking... (VllmWorkerProcess pid=781008) WARNING 11-05 11:35:04 [utils.py:196] TransformersForMultimodalLM has no vLLM implementation, falling back to Transformers implementation. Some features may not be supported and performance may not be optimal. (VllmWorkerProcess pid=781008) INFO 11-05 11:35:04 [transformers.py:400] Using Transformers backend. (VllmWorkerProcess pid=781007) WARNING 11-05 11:35:04 [utils.py:196] TransformersForMultimodalLM has no vLLM implementation, falling back to Transformers implementation. Some features may not be supported and performance may not be optimal. (VllmWorkerProcess pid=781007) INFO 11-05 11:35:04 [transformers.py:400] Using Transformers backend. (VllmWorkerProcess pid=781006) WARNING 11-05 11:35:04 [utils.py:196] TransformersForMultimodalLM has no vLLM implementation, falling back to Transformers implementation. Some features may not be supported and performance may not be optimal. (VllmWorkerProcess pid=781006) INFO 11-05 11:35:04 [transformers.py:400] Using Transformers backend. WARNING 11-05 11:35:04 [utils.py:196] TransformersForMultimodalLM has no vLLM implementation, falling back to Transformers implementation. Some features may not be supported and performance may not be optimal. INFO 11-05 11:35:04 [transformers.py:400] Using Transformers backend.
(VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] Exception in worker VllmWorkerProcess while processing method load_model. (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] Traceback (most recent call last): (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/executor/multiproc_worker_utils.py", line 226, in _run_worker_process (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] output = run_method(worker, method, args, kwargs) (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/utils/__init__.py", line 3007, in run_method (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] return func(*args, **kwargs) (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/worker/worker.py", line 211, in load_model (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] self.model_runner.load_model() (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/worker/model_runner.py", line 1083, in load_model (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] self.model = get_model(vllm_config=self.vllm_config) (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/model_executor/model_loader/__init__.py", line 118, in get_model (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] return loader.load_model(vllm_config=vllm_config, (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/model_executor/model_loader/base_loader.py", line 44, in load_model (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] model = 
initialize_model(vllm_config=vllm_config, (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/model_executor/model_loader/utils.py", line 63, in initialize_model (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] return model_class(vllm_config=vllm_config, prefix=prefix) (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/model_executor/models/transformers.py", line 737, in __init__ (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] super().__init__(vllm_config=vllm_config, prefix=prefix) (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/compilation/decorators.py", line 183, in __init__ (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs) (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/model_executor/models/transformers.py", line 661, in __init__ (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] super().__init__(vllm_config=vllm_config, prefix=prefix) (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/Qwen3-Omni/vllm/model_executor/models/transformers.py", line 423, in __init__ (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] self.model: PreTrainedModel = AutoModel.from_config( (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] File "/anaconda3/envs/vis3/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 458, in from_config (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] raise ValueError( (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] ValueError: 
Unrecognized configuration class <class 'transformers.models.qwen3_omni_moe.configuration_qwen3_omni_moe.Qwen3OmniMoeConfig'> for this kind of AutoModel: AutoModel. (VllmWorkerProcess pid=781007) ERROR 11-05 11:35:05 [multiproc_worker_utils.py:232] Model type should be one of Aimv2Config, Aimv2VisionConfig, AlbertConfig, AlignConfig, AltCLIPConfig, ApertusConfig, ArceeConfig, AriaConfig, AriaTextConfig, ASTConfig, AutoformerConfig, AyaVisionConfig, BambaConfig, BarkConfig, BartConfig, BeitConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BitConfig, BitNetConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, Blip2Config, Blip2QFormerConfig, BloomConfig, BltConfig, BridgeTowerConfig, BrosConfig, CamembertConfig, CanineConfig, ChameleonConfig, ChineseCLIPConfig, ChineseCLIPVisionConfig, ClapConfig, CLIPConfig, CLIPTextConfig, CLIPVisionConfig, CLIPSegConfig, ClvpConfig, LlamaConfig, CodeGenConfig, CohereConfig, Cohere2Config, Cohere2VisionConfig, ConditionalDetrConfig, ConvBertConfig, ConvNextConfig, ConvNextV2Config, CpmAntConfig, CsmConfig, CTRLConfig, CvtConfig, DFineConfig, DabDetrConfig, DacConfig, Data2VecAudioConfig, Data2VecTextConfig, Data2VecVisionConfig, DbrxConfig, DebertaConfig, DebertaV2Config, DecisionTransformerConfig, DeepseekV2Config, DeepseekV3Config, DeepseekVLConfig, DeepseekVLHybridConfig, DeformableDetrConfig, DeiTConfig, DepthProConfig, DetaConfig, DetrConfig, DiaConfig, DiffLlamaConfig, DinatConfig, Dinov2Config, Dinov2WithRegistersConfig, DINOv3ConvNextConfig, DINOv3ViTConfig, DistilBertConfig, DogeConfig, DonutSwinConfig, Dots1Config, DPRConfig, DPTConfig, EdgeTamConfig, EdgeTamVideoConfig, EdgeTamVisionConfig, EfficientFormerConfig, EfficientLoFTRConfig, EfficientNetConfig, ElectraConfig, Emu3Config, EncodecConfig, ErnieConfig, Ernie4_5Config, Ernie4_5_MoeConfig, ErnieMConfig, EsmConfig, EvollaConfig, Exaone4Config, FalconConfig, FalconH1Config, FalconMambaConfig, FastSpeech2ConformerConfig, 
FastSpeech2ConformerWithHifiGanConfig, FlaubertConfig, FlavaConfig, FlexOlmoConfig, Florence2Config, FNetConfig, FocalNetConfig, FSMTConfig, FunnelConfig, FuyuConfig, GemmaConfig, Gemma2Config, Gemma3Config, Gemma3TextConfig, Gemma3nConfig, Gemma3nAudioConfig, Gemma3nTextConfig, Gemma3nVisionConfig, GitConfig, GlmConfig, Glm4Config, Glm4MoeConfig, Glm4vConfig, Glm4vMoeConfig, Glm4vMoeTextConfig, Glm4vTextConfig, GLPNConfig, GotOcr2Config, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GptOssConfig, GPTJConfig, GPTSanJapaneseConfig, GraniteConfig, GraniteMoeConfig, GraniteMoeHybridConfig, GraniteMoeSharedConfig, GraphormerConfig, GroundingDinoConfig, GroupViTConfig, HeliumConfig, HGNetV2Config, HieraConfig, HubertConfig, HunYuanDenseV1Config, HunYuanMoEV1Config, IBertConfig, IdeficsConfig, Idefics2Config, Idefics3Config, Idefics3VisionConfig, IJepaConfig, ImageGPTConfig, InformerConfig, InstructBlipConfig, InstructBlipVideoConfig, InternVLConfig, InternVLVisionConfig, JambaConfig, JanusConfig, JetMoeConfig, JukeboxConfig, Kosmos2Config, Kosmos2_5Config, KyutaiSpeechToTextConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LevitConfig, Lfm2Config, Lfm2VlConfig, LightGlueConfig, LiltConfig, LlamaConfig, Llama4Config, Llama4TextConfig, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, LongcatFlashConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MambaConfig, Mamba2Config, MarianConfig, MarkupLMConfig, Mask2FormerConfig, MaskFormerConfig, MaskFormerSwinConfig, MBartConfig, MCTCTConfig, MegaConfig, MegatronBertConfig, MetaClip2Config, MgpstrConfig, MimiConfig, MiniMaxConfig, MinistralConfig, MistralConfig, Mistral3Config, MixtralConfig, MLCDVisionConfig, MllamaConfig, MMGroundingDinoConfig, MobileBertConfig, MobileNetV1Config, MobileNetV2Config, MobileViTConfig, MobileViTV2Config, ModernBertConfig, ModernBertDecoderConfig, MoonshineConfig, MoshiConfig, 
MPNetConfig, MptConfig, MraConfig, MT5Config, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NatConfig, NemotronConfig, NezhaConfig, NllbMoeConfig, NystromformerConfig, OlmoConfig, Olmo2Config, Olmo3Config, OlmoeConfig, OmDetTurboConfig, OneFormerConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, Ovis2Config, Owlv2Config, OwlViTConfig, PaliGemmaConfig, ParakeetCTCConfig, ParakeetEncoderConfig, PatchTSMixerConfig, PatchTSTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, TimmWrapperConfig, PerceptionLMConfig, PersimmonConfig, PhiConfig, Phi3Config, Phi4MultimodalConfig, PhimoeConfig, PixtralVisionConfig, PLBartConfig, PoolFormerConfig, ProphetNetConfig, PvtConfig, PvtV2Config, QDQBertConfig, Qwen2Config, Qwen2_5_VLConfig, Qwen2_5_VLTextConfig, Qwen2AudioEncoderConfig, Qwen2MoeConfig, Qwen2VLConfig, Qwen2VLTextConfig, Qwen3Config, Qwen3MoeConfig, Qwen3NextConfig, Qwen3VLConfig, Qwen3VLMoeConfig, Qwen3VLMoeTextConfig, Qwen3VLTextConfig, RecurrentGemmaConfig, ReformerConfig, RegNetConfig, RemBertConfig, ResNetConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RTDetrConfig, RTDetrV2Config, RwkvConfig, SamConfig, Sam2Config, Sam2HieraDetConfig, Sam2VideoConfig, Sam2VisionConfig, SamHQConfig, SamHQVisionConfig, SamVisionConfig, SeamlessM4TConfig, SeamlessM4Tv2Config, SeedOssConfig, SegformerConfig, SegGptConfig, SEWConfig, SEWDConfig, SiglipConfig, Siglip2Config, Siglip2VisionConfig, SiglipVisionConfig, SmolLM3Config, SmolVLMConfig, SmolVLMVisionConfig, Speech2TextConfig, SpeechT5Config, SplinterConfig, SqueezeBertConfig, StableLmConfig, Starcoder2Config, SwiftFormerConfig, SwinConfig, Swin2SRConfig, Swinv2Config, SwitchTransformersConfig, T5Config, T5GemmaConfig, TableTransformerConfig, TapasConfig, TextNetConfig, TimeSeriesTransformerConfig, TimesFmConfig, TimesformerConfig, TimmBackboneConfig, TimmWrapperConfig, TrajectoryTransformerConfig, TransfoXLConfig, TvltConfig, TvpConfig, UdopConfig, UMT5Config, 
UniSpeechConfig, UniSpeechSatConfig, UnivNetConfig, VanConfig, VaultGemmaConfig, VideoLlavaConfig, VideoMAEConfig, ViltConfig, VipLlavaConfig, VisionTextDualEncoderConfig, VisualBertConfig, ViTConfig, ViTHybridConfig, ViTMAEConfig, ViTMSNConfig, VitDetConfig, VitsConfig, VivitConfig, VJEPA2Config, VoxtralConfig, VoxtralEncoderConfig, Wav2Vec2Config, Wav2Vec2BertConfig, Wav2Vec2ConformerConfig, WavLMConfig, WhisperConfig, XCLIPConfig, XcodecConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, xLSTMConfig, XmodConfig, YolosConfig, YosoConfig, ZambaConfig, Zamba2Config.
[rank0]: Traceback (most recent call last):
[rank0]:   File "/Qwen3-Omni/web_demo.py", line 394, in <module>
[rank0]:     model, processor = _load_model_processor(args)
[rank0]:   File "/Qwen3-Omni/web_demo.py", line 38, in _load_model_processor
[rank0]:     model = LLM(
[rank0]:   File "/Qwen3-Omni/vllm/entrypoints/llm.py", line 285, in __init__
[rank0]:     self.llm_engine = LLMEngine.from_engine_args(
[rank0]:   File "/Qwen3-Omni/vllm/engine/llm_engine.py", line 490, in from_engine_args
[rank0]:     return engine_cls.from_vllm_config(
[rank0]:   File "/Qwen3-Omni/vllm/engine/llm_engine.py", line 466, in from_vllm_config
[rank0]:     return cls(
[rank0]:   File "/Qwen3-Omni/vllm/engine/llm_engine.py", line 257, in __init__
[rank0]:     self.model_executor = executor_class(vllm_config=vllm_config)
[rank0]:   File "/Qwen3-Omni/vllm/executor/executor_base.py", line 264, in __init__
[rank0]:     super().__init__(*args, **kwargs)
[rank0]:   File "/Qwen3-Omni/vllm/executor/executor_base.py", line 54, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/Qwen3-Omni/vllm/executor/mp_distributed_executor.py", line 126, in _init_executor
[rank0]:     self._run_workers("load_model",
[rank0]:   File "/Qwen3-Omni/vllm/executor/mp_distributed_executor.py", line 186, in _run_workers
[rank0]:     driver_worker_output = run_method(self.driver_worker, sent_method,
[rank0]:   File "/Qwen3-Omni/vllm/utils/__init__.py", line 3007, in run_method
[rank0]:     return func(*args, **kwargs)
[rank0]:   File "/Qwen3-Omni/vllm/worker/worker.py", line 211, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/Qwen3-Omni/vllm/worker/model_runner.py", line 1083, in load_model
[rank0]:     self.model = get_model(vllm_config=self.vllm_config)
[rank0]:   File "/Qwen3-Omni/vllm/model_executor/model_loader/__init__.py", line 118, in get_model
[rank0]:     return loader.load_model(vllm_config=vllm_config,
[rank0]:   File "/Qwen3-Omni/vllm/model_executor/model_loader/base_loader.py", line 44, in load_model
[rank0]:     model = initialize_model(vllm_config=vllm_config,
[rank0]:   File "/Qwen3-Omni/vllm/model_executor/model_loader/utils.py", line 63, in initialize_model
[rank0]:     return model_class(vllm_config=vllm_config, prefix=prefix)
[rank0]:   File "/Qwen3-Omni/vllm/model_executor/models/transformers.py", line 737, in __init__
[rank0]:     super().__init__(vllm_config=vllm_config, prefix=prefix)
[rank0]:   File "/Qwen3-Omni/vllm/compilation/decorators.py", line 183, in __init__
[rank0]:     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
[rank0]:   File "/Qwen3-Omni/vllm/model_executor/models/transformers.py", line 661, in __init__
[rank0]:     super().__init__(vllm_config=vllm_config, prefix=prefix)
[rank0]:   File "/Qwen3-Omni/vllm/model_executor/models/transformers.py", line 423, in __init__
[rank0]:     self.model: PreTrainedModel = AutoModel.from_config(
[rank0]:   File "/anaconda3/envs/vis3/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 458, in from_config
[rank0]:     raise ValueError(
[rank0]: ValueError: Unrecognized configuration class <class 'transformers.models.qwen3_omni_moe.configuration_qwen3_omni_moe.Qwen3OmniMoeConfig'> for this kind of AutoModel: AutoModel.
[rank0]:[W1105 11:35:06.754006060 ProcessGroupNCCL.cpp:1479] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
/anaconda3/envs/vis3/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d ')
```

### Who can help?

I have cloned the [Qwen3-Omni](https://github.com/QwenLM/Qwen3-Omni) repo and downloaded [Qwen3-Omni](https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct) locally from Hugging Face, then run `python web_demo.py`.

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

1) Download [Qwen3-Omni](https://github.com/QwenLM/Qwen3-Omni)
2) Download the models [Qwen3-Omni-Instruct](https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct) or [Qwen3-Omni-Thinking](https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Thinking) from Hugging Face
3) Install the requirements following the Qwen3-Omni git instructions, with the command lines below

```
git clone -b qwen3_omni https://github.com/wangxiongts/vllm.git
cd vllm
pip install -r requirements/build.txt
pip install -r requirements/cuda.txt
export VLLM_PRECOMPILED_WHEEL_LOCATION=https://wheels.vllm.ai/a5dd03c1ebc5e4f56f3c9d3dc0436e9c582c978f/vllm-0.9.2-cp38-abi3-manylinux1_x86_64.whl
VLLM_USE_PRECOMPILED=1 pip install -e . -v --no-build-isolation
# If you meet an "Undefined symbol" error while using VLLM_USE_PRECOMPILED=1, please use "pip install -e . -v" to build from source.
# Install Transformers
pip install git+https://github.com/huggingface/transformers
pip install accelerate
pip install qwen-omni-utils -U
pip install -U flash-attn --no-build-isolation
pip install gradio==5.44.1 gradio_client==1.12.1 soundfile==0.13.1
```

Then, in the environment, run `python web_demo.py`.

### Expected behavior

The model should load without issues, the server should start, and inference on the loaded video should run without problems.
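The `Unrecognized configuration class` error above is raised by `AutoModel.from_config`'s config-class dispatch: the installed transformers version has no `AutoModel` mapping entry for `Qwen3OmniMoeConfig`. A minimal illustrative sketch of that dispatch (toy classes and mapping, not the actual transformers implementation):

```python
# Toy stand-ins; real transformers keeps a registry of config -> model classes.
class RegisteredConfig: ...
class UnregisteredConfig: ...   # plays the role of Qwen3OmniMoeConfig

MODEL_MAPPING = {RegisteredConfig: "RegisteredModel"}

def from_config(config):
    """Look up the model class for a config; raise if the config is unknown."""
    for config_cls, model_cls in MODEL_MAPPING.items():
        if type(config) is config_cls:
            return model_cls
    raise ValueError(
        f"Unrecognized configuration class {type(config)} for this kind of AutoModel: AutoModel."
    )

model = from_config(RegisteredConfig())     # dispatch succeeds
try:
    from_config(UnregisteredConfig())       # no mapping entry -> ValueError
except ValueError as e:
    msg = str(e)
```

Upgrading to a transformers build that registers the Qwen3-Omni config classes is what resolves this class of error.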
closed
completed
false
5
[ "bug" ]
[]
2025-11-05T11:39:39Z
2026-02-11T23:54:10Z
2025-12-27T08:03:07Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Tortoise17
36,593,708
MDQ6VXNlcjM2NTkzNzA4
User
false
huggingface/transformers
3,604,732,641
I_kwDOCUB6oc7W29rh
42,111
https://github.com/huggingface/transformers/issues/42111
https://api.github.com/repos/huggingface/transformers/issues/42111
Add thinking-budget support (max_thinking_tokens) for reasoning-capable chat models
### Feature request

A built-in way to cap how many tokens a reasoning model spends inside its `<think> … </think>` block. Today, we can only control the total response length via `max_new_tokens`. No parameter limits the internal reasoning segment when `enable_thinking=True`.

### Motivation

- Reasoning models (e.g., the Qwen3 series) often produce very long thought blocks, which can blow past latency budgets before the final answer starts.
- Users need a simple, model-agnostic control to bound that "thinking" cost without disabling reasoning entirely.
- The Qwen docs (https://qwen.readthedocs.io/en/latest/getting_started/quickstart.html#thinking-budget) already describe a brute-force approach (two-step generation) to implement "thinking budgets".

### Your contribution

I want to submit a PR that:

- Extends `GenerationConfig` with:
  - `max_thinking_tokens`: integer budget for reasoning tokens.
  - `begin_thinking_token_id` / `end_thinking_token_id`: marker IDs so generation knows where the thinking span begins/ends.
- Adds a `MaxThinkingTokensLogitsProcessor` that watches the active `<think>` block. Once the budget is reached, it forces `end_thinking_token_id`, ensuring the model exits reasoning and continues with the final response.
- Documents the new parameter in reasoning-model guides (EXAONE, CWM, etc.) and shows how to wire the thinking-token IDs until configs do it automatically.
- Provides unit coverage so `_get_logits_processor` injects the new processor whenever the config is fully specified.
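The forced-exit logic the proposal describes can be sketched independently of transformers. The class name below mirrors the proposed `MaxThinkingTokensLogitsProcessor`, but the list-based interface, token ids, and implementation details are illustrative assumptions, not the actual transformers `LogitsProcessor` API:

```python
import math

class MaxThinkingTokensSketch:
    """Illustrative thinking-budget processor.

    Once `max_thinking_tokens` tokens have been generated inside an open
    <think> ... </think> span, every logit except the one for
    `end_thinking_token_id` is masked to -inf, forcing the model to close
    the reasoning block on the next step.
    """

    def __init__(self, max_thinking_tokens, begin_id, end_id):
        self.max_thinking_tokens = max_thinking_tokens
        self.begin_id = begin_id
        self.end_id = end_id

    def __call__(self, input_ids, scores):
        # Locate the most recent begin-thinking marker, if any.
        try:
            start = len(input_ids) - 1 - input_ids[::-1].index(self.begin_id)
        except ValueError:
            return scores  # no thinking span started
        if self.end_id in input_ids[start:]:
            return scores  # span already closed
        if len(input_ids) - start - 1 < self.max_thinking_tokens:
            return scores  # still within budget
        # Budget exhausted: only the end-thinking token stays viable.
        return [0.0 if i == self.end_id else -math.inf
                for i in range(len(scores))]

# Example: begin=1, end=2, budget of 3 thinking tokens over a 10-token vocab.
proc = MaxThinkingTokensSketch(3, begin_id=1, end_id=2)
within = proc([5, 1, 7, 8], [0.1] * 10)        # 2 thinking tokens: untouched
exhausted = proc([5, 1, 7, 8, 9], [0.1] * 10)  # 3 thinking tokens: force end
```

A real implementation would operate on batched tensors of scores rather than Python lists, but the budget bookkeeping is the same.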
open
null
false
1
[ "Feature request" ]
[]
2025-11-09T10:09:11Z
2026-02-14T05:37:15Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
AndresAlgaba
35,764,158
MDQ6VXNlcjM1NzY0MTU4
User
false
huggingface/transformers
3,607,099,901
I_kwDOCUB6oc7W__n9
42,116
https://github.com/huggingface/transformers/issues/42116
https://api.github.com/repos/huggingface/transformers/issues/42116
Integration of the SINQ quantization strategy
### Feature request

Adding support for **SINQ** quantization for Hugging Face compatible models, enabling users to apply it directly through the configuration settings.

The **SINQ** quantization method, recently introduced in the paper [SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights](https://huggingface.co/papers/2509.22944), has quickly gained significant attention. It demonstrates superior effectiveness compared to existing approaches such as HQQ, while also offering substantially faster quantization times.

### Motivation

Integrating the **SINQ** quantization algorithm into the Transformers library (as already done for _HQQ_, _AWQ_, _HIGGS_, ...) would allow users to easily quantize models by simply specifying the desired quantization method and parameters within the configuration settings, removing the need to consult and directly use custom code from the [SINQ repository](https://github.com/huawei-csl/SINQ). This integration aims to streamline and simplify the quantization process while leveraging the existing features and infrastructure of the Transformers library.

### Your contribution

I'm going to submit a pull request that includes the implementation and testing of the SINQ quantization integration. This integration enables users to specify the quantization method directly through the configuration, as shown below:

```python
cfg = SinqConfig(
    nbits=4,
    group_size=64,
    tiling_mode="1D",
    method="sinq",
    dtype="auto",
    modules_to_not_convert=["lm_head"],
    device="cuda:1",
)
```

Once the configuration is defined, the model can be quantized simply by calling the `from_pretrained()` function with the specified configuration settings.
closed
completed
false
8
[ "Feature request" ]
[]
2025-11-10T09:44:32Z
2026-02-16T15:08:43Z
2026-02-16T15:08:43Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
ChiaraBoretti
83,216,540
MDQ6VXNlcjgzMjE2NTQw
User
false
huggingface/transformers
3,619,868,194
I_kwDOCUB6oc7Xws4i
42,175
https://github.com/huggingface/transformers/issues/42175
https://api.github.com/repos/huggingface/transformers/issues/42175
TensorFlow backend not included in BACKENDS_MAPPING when using pip install '.[torch]'
### System Info I installed the package successfully using `pip install -e .[torch]`. However, I encountered the issue below when using ``pip install '.[torch]'``: ``` (omni) pqyin@proj54:/data2/pqyin/transformers$ python Python 3.13.9 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 19:16:10) [GCC 11.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import transformers Traceback (most recent call last): File "<python-input-0>", line 1, in <module> import transformers File "/home/pqyin/miniconda3/envs/omni/lib/python3.13/site-packages/transformers/__init__.py", line 768, in <module> sys.modules[__name__] = _LazyModule( ~~~~~~~~~~~^ __name__, ^^^^^^^^^ ...<3 lines>... extra_objects={"__version__": __version__}, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/home/pqyin/miniconda3/envs/omni/lib/python3.13/site-packages/transformers/utils/import_utils.py", line 1847, in __init__ raise ValueError( f"Backend should be defined in the BACKENDS_MAPPING. Offending backend: {backend}" ) ValueError: Backend should be defined in the BACKENDS_MAPPING. Offending backend: tf ``` I checked the module and found that two backends `tensorflow_text` and `tf` are imported from `{'models.bert.tokenization_bert_tf': {'TFBertTokenizer'}}`, but `tf` and `tensorflow_text` are not in `BACKENDS_MAPPING`. I found that the `tokenization_bert_tf` file is already deleted in the latest version, but installing from the pre-compiled whl file still includes the `tokenization_bert_tf` files, which causes the import issue. BTW, the deprecated `tokenization_bert_tf` import could be removed from type checking. ``` # line 25 in bert.__init__.py from .tokenization_bert_tf import * ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction Use `pip install '.[torch]'` ### Expected behavior ``` File "/home/pqyin/miniconda3/envs/omni/lib/python3.13/site-packages/transformers/__init__.py", line 777, in <module> sys.modules[__name__] = _LazyModule( ~~~~~~~~~~~^ __name__, ^^^^^^^^^ ...<3 lines>... extra_objects={"__version__": __version__}, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/home/pqyin/miniconda3/envs/omni/lib/python3.13/site-packages/transformers/utils/import_utils.py", line 1847, in __init__ raise ValueError( f"Backend should be defined in the BACKENDS_MAPPING. Offending backend: {backend}" ) ValueError: Backend should be defined in the BACKENDS_MAPPING. Offending backend: tensorflow_text ```
closed
completed
false
2
[ "bug" ]
[]
2025-11-13T07:21:50Z
2026-02-13T22:47:40Z
2025-11-18T14:49:34Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
yinpeiqi
60,515,999
MDQ6VXNlcjYwNTE1OTk5
User
false
huggingface/transformers
3,623,280,797
I_kwDOCUB6oc7X9uCd
42,199
https://github.com/huggingface/transformers/issues/42199
https://api.github.com/repos/huggingface/transformers/issues/42199
Cardinality error is incorrect for models derived from DETR that do not have an explicit background class
## Issue For DETR variants, the cardinality errors that are reported during training are incorrect. This was reported in the DeformableDETR repository, and was acknowledged but not resolved: https://github.com/fundamentalvision/Deformable-DETR/issues/24 Since all the derived models no longer include an explicit background class, the returned cardinality error is wrong. While `loss_cardinality` is not strictly a loss, it is a bug to include it. It's confusing to users because it's returned as part of the loss dict. It also breaks a common pattern in logging frameworks of calling `logger.log('total_loss', sum(loss_dict.values()))`, because `loss_cardinality` is both huge compared to the real losses, and does not change during training in a meaningful way. ## Context In the original DETR implementation (broadly copied here), the class embedding outputs +1 logits to explicitly model the background (which is tacked on as index `-1`). Hence the recommendation to instantiate models with `max(class_ids)`. In later works, the loss function uses a sigmoid/focal loss and the output logits have length `num_classes`. It's mentioned in passing in the DeformableDETR paper, but it's not clear to me whether it was strictly necessary for the decoder design or faster convergence. I'm not aware of an earlier experiment, but every other paper seems to now use that convention. The ImageLoss function has this method: https://github.com/huggingface/transformers/blob/8cb5963cc22174954e7dca2c0a3320b7dc2f4edc/src/transformers/loss/loss_for_object_detection.py#L143-L157 Note that the assumption is “no object” is explicitly class index -1. In other versions, like Deformable DETR, ImageLoss is sub-classed. But since this index actually refers to a genuine class, what this actually computes is the number of predictions that are `classes[-1]` versus ground truth. As an obvious example, in the case of a 1-class model, the output is always `num_queries - num_gt`. 
## Proposed solution Remove the entry in the `loss_dict` entirely and don't calculate cardinality error. I'm happy to submit a PR for this. Alternatively we could compute it based on a score threshold. A simple hack would be to set a 0.5 threshold for sigmoid and use that to identify foreground classes. That wouldn't break existing code and would give a useful signal during training. ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Run the forward pass with any of these models, including targets. This is a 1-class model: <img width="1405" height="321" alt="Image" src="https://github.com/user-attachments/assets/a0da05e3-66e2-4a7d-8193-70c5bacba742" /> Or run any of the training examples and manually feed through the loss function. ### Expected behavior Either return a true cardinality error or nothing. However as above - it doesn't make sense because the model returns all queries as foreground predictions.
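The score-threshold alternative mentioned in the proposed solution could look roughly like this. This is a minimal sketch, not the library's implementation: the function name is made up, `class_labels` follows the DETR target convention, and the 0.5 threshold is the assumption from the proposal above.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def cardinality_error_sigmoid(logits, targets, threshold=0.5):
    """L1 distance between the number of foreground predictions and GT boxes.

    Assumes sigmoid-style class logits with no explicit background class:
    a query counts as foreground if its best class score exceeds `threshold`.
    """
    # logits: (batch, num_queries, num_classes) -> per-image count of
    # queries whose best sigmoid score clears the threshold
    card_pred = (logits.sigmoid().max(-1).values > threshold).sum(1)
    target_lengths = torch.as_tensor(
        [len(t["class_labels"]) for t in targets], device=logits.device
    )
    return F.l1_loss(card_pred.float(), target_lengths.float())
```

Unlike the current `-1`-index version, this stays meaningful for 1-class models, where the existing code always reports `num_queries - num_gt`.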
closed
completed
false
10
[ "bug" ]
[]
2025-11-13T23:46:02Z
2026-02-09T17:30:44Z
2026-02-09T17:30:44Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jveitchmichaelis
3,159,591
MDQ6VXNlcjMxNTk1OTE=
User
false
huggingface/transformers
3,623,324,953
I_kwDOCUB6oc7X940Z
42,200
https://github.com/huggingface/transformers/issues/42200
https://api.github.com/repos/huggingface/transformers/issues/42200
Request to rewrite the implementation of prediction_step in trainer.py
### System Info Any system. Because it's a problem coming from source code. ### Who can help? @SunMarc ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Hi, i am talking about an issue that was reported 5 years ago but still exists in 2025, specifically, 13th Nov, 2025. I quote one of the issues that was discussed before, ignored by sgugger. Please find the link below https://discuss.huggingface.co/t/cuda-out-of-memory-when-using-trainer-with-compute-metrics/2941 When i was about to fine tune a LLM today, i ran into the same issue but i got saved by one folk's solution provided in this discussion. How to reproduce (you should have a GPU, no quantization, just full fine tuning): 1. Find a random decoder-only text2text LLM, let's say Qwen3 0.6B. 2. Prepare a train dataset (>0 rows) and eval dataset (>850 rows). 3. Set eval_on_start = True, either TrainingArguments or SFTConfig could work. 4. Implement your own compute_metrics BUT DON'T implement preprocess_logits_for_metrics. 5. start training (don't need deepspeed or accelerate, just trainer.train()) What would happen? First it would go through the evaluation dataset because i set eval_on_start=True, the model would go really fast originally but then it would go extremely slow. Finally, you would get an error that says numpy is trying to allocate a ridiculously big array to memory. <img width="1567" height="986" alt="Image" src="https://github.com/user-attachments/assets/e1885324-fb09-48b6-8bfd-d36306c2a156" /> One of the folk who seems to be inspired by example code provided the implementation of preprocess_logits_for_metrics, which solved problem i encountered perfectly. The evaluation run is done within 2 mins. Why it would happen? I briefly go over the source code of evaluation_loop and i located prediction_step. 
prediction_step says it would return a tuple of three optional torch.Tensor (loss, logits, label). <img width="719" height="68" alt="Image" src="https://github.com/user-attachments/assets/537032b1-9371-4852-bed8-8f31cd6a0437" /> But most of the time, the returned logits is a tuple. Why? If you look at the function that processes logits before logits is returned: <img width="535" height="140" alt="Image" src="https://github.com/user-attachments/assets/d6f7f3b1-6c2a-4298-b4c2-f0ab85fa88cf" /> This function would receive all kinds of "tensors". The type of "tensors" could be list, tuple, Mapping or torch.Tensor. Does it change the variable, called "tensors", from other data types to torch.Tensor? No. type(tensors)(........) would preserve the original type of tensors. It means if the variable "tensors" (i hate this variable name because it is misleading and confusing) is a tuple, after this function, it's still a tuple!!!!! It's a recursive function btw. I would love doing recursion in programming competition, but not in huggingface codebase!!! It also implies a fact that the input of nested_detach could be complexly nested, like ([],()) So this function doesn't guarantee the logits is a torch.Tensor. Neither does the code in prediction_step that runs before nested_detach is called <img width="702" height="759" alt="Image" src="https://github.com/user-attachments/assets/451982b4-648b-4876-a2b7-c9d748899fd1" /> So, the logits is not always a torch.Tensor, which contradicts the type hint. What did the developers do? They developed preprocess_logits_for_metrics. So that users could fix it ON THEIR OWN IMPLEMENTATION. (preprocess_logits_for_metrics is called within evaluation_loop to clean the mess, specifically, logits, returned by prediction_step()) <img width="803" height="772" alt="Image" src="https://github.com/user-attachments/assets/c7494018-e282-4577-b824-3db9c9e57609" /> It's such a lazy fix. 
Why is a regular user expected to implement their own preprocess_logits_for_metrics, to deal with a poorly-designed prediction_step? It has been 5 years since this was first reported......... If a user-defined compute_metrics is not provided to Trainer or SFTTrainer, the prediction_step would return (loss, None, None), which skips the whole problem, and this is why users said the issue of "slow evaluation" is gone when they don't provide compute_metrics. I would like to make a Pull Request to fix it but i don't have enough time and energy to do this massive amount of work. A temporary fix is to let users know that when they need to make their own compute_metrics, they also have to implement preprocess_logits_for_metrics. Different models would need different implementations, but here is one for a text2text decoder-only LLM. <img width="687" height="78" alt="Image" src="https://github.com/user-attachments/assets/125fffe3-d8cc-44c7-9a96-35a11500d975" /> (Another thing is that the variable called "labels" in all the implementations of preprocess_logits_for_metrics i have ever seen so far, is ignored. what is the meaning of "labels" here?) The folk who provided the solution to help other users in the discussion (i attached earlier in this post) said there might be a memory leak in Trainer that causes the extremely slow evaluation run. The implementation of preprocess_logits_for_metrics might just hide the actual problem further, rather than solving it. ### Expected behavior prediction_step should actually return a tuple of three optional torch.Tensor, as implied by its type hint.
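For the decoder-only text2text case described above, the user-side workaround discussed in the thread typically looks like the sketch below (a common pattern, not an official fix; `labels` is accepted but unused, matching the signature Trainer passes):

```python
import torch


def preprocess_logits_for_metrics(logits, labels):
    # prediction_step can hand back a tuple (logits, past_key_values, ...);
    # keep only the actual logits tensor.
    if isinstance(logits, (tuple, list)):
        logits = logits[0]
    # Reduce (batch, seq_len, vocab_size) float logits to (batch, seq_len)
    # token ids, so evaluation_loop accumulates small int tensors instead of
    # materializing the full vocabulary dimension for every eval example.
    return logits.argmax(dim=-1)
```

Passed as `Trainer(..., preprocess_logits_for_metrics=preprocess_logits_for_metrics)`, this keeps the accumulated eval tensors small enough to avoid the huge numpy allocation shown in the screenshot.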
open
null
false
4
[ "Good Second Issue", "bug" ]
[]
2025-11-14T00:13:40Z
2026-02-24T22:09:56Z
null
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
Yacklin
139,425,274
U_kgDOCE91-g
User
false
huggingface/transformers
3,624,126,333
I_kwDOCUB6oc7YA8d9
42,202
https://github.com/huggingface/transformers/issues/42202
https://api.github.com/repos/huggingface/transformers/issues/42202
Deformable DETR Finetuning breaks for any dataset
### System Info - GPU: V100 - torch2.6.0+cu126 - transformers 4.57.1 ### Who can help? Hi @yonigozlan @molbap @NielsRogge Thanks for the awesome work on vision models! I've been trying to finetune the Deformable DETR models (SenseTime/deformable-detr-with-box-refine-two-stage) for the past few days on a custom object detection dataset using the finetuning DETR notebook suggested in the Docs (https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) and have swapped out the model name where needed, to the one Deformable DETR model mentioned above , and I have constantly been running into errors, two in particular: > File /libraries/env/lib/python3.11/site-packages/transformers/loss/loss_deformable_detr.py:55, in <listcomp>(.0) 52 cost_matrix = cost_matrix.view(batch_size, num_queries, -1).cpu() 54 sizes = [len(v["boxes"]) for v in targets] ---> 55 indices = [linear_sum_assignment(c[i]) for i, c in enumerate(cost_matrix.split(sizes, -1))] 56 return [(torch.as_tensor(i, dtype=torch.int64), torch.as_tensor(j, dtype=torch.int64)) for i, j in indices] \ ValueError: matrix contains invalid numeric entries (on several forums this was addressed by turning off AMP, which in your example notebook using Trainer, can be done by passing precision = 32) and when I can get that to work, I am immediately hit by - > "/libraries/env/lib/python3.11/site-packages/transformers/loss/loss_for_object_detection.py", line 418, in generalized_box_iou [rank1]: raise ValueError(f"boxes1 must be in [x0, y0, x1, y1] (corner) format, but got {boxes1}") [rank1]: ValueError: boxes1 must be in [x0, y0, x1, y1] (corner) format, but got tensor([[nan, nan, nan, nan], [rank1]: [nan, nan, nan, nan], [rank1]: [nan, nan, nan, nan], [rank1]: ..., [rank1]: [nan, nan, nan, nan], [rank1]: [nan, nan, nan, nan], [rank1]: [nan, nan, nan, nan]], device='cuda:1') Epoch 0: 0%| | 1/4358 [00:03<4:23:53, 0.28it/s, v_num=20, 
training_loss_step=nan.0] I for the life of me can't figure out what is going on. I tried the same notebook code with my dataset using the original DETR model (facebook/detr-resnet-50) listed and it works perfectly well. For sanity, I went back and tried to run the balloon dataset as-is in the notebook (https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb), but with the Deformable DETR model and processor and I run into the same errors, kinda proving that my data wasn't the issue. Would love to get your help in understanding why the same notebook doesn't work with the Deformable DETR checkpoints I linked above, since it worked perfectly well on the DETR one's. Other Env details: - GPU: V100 - torch2.6.0+cu126 - transformers 4.57.1 The deformable family of models would suit my usecase well and hence have been trying to make it work. Thank you, love the work the team puts in and appreciate the effort. ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction The exact steps followed in https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb but with the SenseTime/deformable-detr-with-box-refine-two-stage (or other SenseTime/deformable-detr-*) models. ### Expected behavior Expecting it to work similar to the DETR finetuning
closed
completed
false
6
[ "bug" ]
[]
2025-11-14T06:29:52Z
2026-02-08T16:56:36Z
2026-02-08T16:56:36Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
iamsashank09
26,921,144
MDQ6VXNlcjI2OTIxMTQ0
User
false
huggingface/transformers
3,628,809,478
I_kwDOCUB6oc7YSz0G
42,222
https://github.com/huggingface/transformers/issues/42222
https://api.github.com/repos/huggingface/transformers/issues/42222
All vitpose models were broken (transformers/models/vitpose_backbone)
### System Info transformers/models/vitpose_backbone/modeling_vitpose_backbone.py", line 304, in forward raise ValueError( ValueError: dataset_index must be provided when using multiple experts (num_experts=6). Please provide dataset_index to the forward pass. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction transformers/models/vitpose_backbone/modeling_vitpose_backbone.py", line 304, in forward raise ValueError( ValueError: dataset_index must be provided when using multiple experts (num_experts=6). Please provide dataset_index to the forward pass. ### Expected behavior transformers/models/vitpose_backbone/modeling_vitpose_backbone.py", line 304, in forward raise ValueError( ValueError: dataset_index must be provided when using multiple experts (num_experts=6). Please provide dataset_index to the forward pass.
closed
completed
false
11
[ "bug" ]
[]
2025-11-15T14:56:04Z
2026-02-09T08:11:37Z
2026-02-09T08:11:37Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
lucasjinreal
21,303,438
MDQ6VXNlcjIxMzAzNDM4
User
false
huggingface/transformers
3,634,466,348
I_kwDOCUB6oc7YoY4s
42,249
https://github.com/huggingface/transformers/issues/42249
https://api.github.com/repos/huggingface/transformers/issues/42249
`parse_response` should drop EOS
When using `parse_response`, I noticed it includes the EOS token in the `content`. However, the EOS token should be excluded, as it adds an unwanted EOS before tool calls during subsequent formatting. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B") # Why using smollm schema for Qwen3? See https://github.com/huggingface/transformers/issues/42220 smollm_schema = { "x-regex": r"(?:<think>\n?(?P<thinking>.+?)\n?</think>)?\s*(?:<tool_call>(?P<tool_calls>.+?)</tool_call>)?\s*(?P<content>.+?)?\s*(?:<\|im_end\|>|$)", "type": "object", "properties": { "role": {"const": "assistant"}, "content": {"type": "string"}, "thinking": {"type": "string"}, "tool_calls": { "x-parser": "json", "x-parser-args": {"transform": "[{type: 'function', function: @}]"}, "type": "array", "items": { "type": "object", "properties": { "type": {"const": "function"}, "function": { "type": "object", "properties": { "name": {"type": "string"}, "arguments": { "type": "object", "additionalProperties": {}, }, }, }, }, }, }, }, } tokenizer.response_schema = smollm_schema assistant_text = '<tool_call>\n{"name": "multiply", "arguments": {"a": 3, "b": 4}}\n</tool_call><|im_end|>' assistant_message = tokenizer.parse_response(assistant_text) # assistant_message = {'role': 'assistant', 'content': '<|im_end|>', 'tool_calls': [{'type': 'function', 'function': {'name': 'multiply', 'arguments': {'a': 3, 'b': 4}}}]} # ^^^^^^^^^^^^^^^^^^^^^^^^ # extra eos in the content processed = tokenizer.apply_chat_template( [assistant_message], tokenize=False, enable_thinking=False, ) # '<|im_start|>assistant\n<|im_end|>\n<tool_call>\n{"name": "multiply", "arguments": {"a": 3, "b": 4}}\n</tool_call><|im_end|>\n' # ^^^^^^^^^^^^ # extra eos before tool_call block ```
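Until `parse_response` drops the EOS itself, one workaround is to strip a trailing EOS token from the generated text before parsing, so it cannot leak into `content`. A minimal sketch (the helper name is made up; the default `eos` string is Qwen's, as in the example above):

```python
def strip_trailing_eos(text, eos="<|im_end|>"):
    # Remove exactly one trailing EOS token, if present, so it cannot
    # end up inside the parsed `content` field.
    return text[: -len(eos)] if text.endswith(eos) else text
```

Applying this to `assistant_text` before `tokenizer.parse_response(...)` avoids the extra `<|im_end|>` that currently ends up before the tool_call block on re-templating.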
closed
completed
false
7
[]
[]
2025-11-17T18:14:56Z
2026-02-15T08:04:47Z
2026-02-15T08:04:47Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
qgallouedec
45,557,362
MDQ6VXNlcjQ1NTU3MzYy
User
false