repo
string
github_id
int64
github_node_id
string
number
int64
html_url
string
api_url
string
title
string
body
string
state
string
state_reason
string
locked
bool
comments_count
int64
labels
list
assignees
list
created_at
string
updated_at
string
closed_at
string
author_association
string
milestone_title
string
snapshot_id
string
extracted_at
string
author_login
string
author_id
int64
author_node_id
string
author_type
string
author_site_admin
bool
huggingface/transformers
4,026,192,695
I_kwDOCUB6oc7v-tM3
44,457
https://github.com/huggingface/transformers/issues/44457
https://api.github.com/repos/huggingface/transformers/issues/44457
LORA权重合并保存在本地之后重新加载出来,两个模型输出结果不一致
### System Info Debian GNU/Linux 12 (bookworm)+5090-32G ``` absl-py 2.3.1 accelerate 1.12.0 aiohappyeyeballs 2.6.1 aiohttp 3.13.2 aiosignal 1.4.0 altgraph 0.17.5 annotated-doc ...
closed
completed
false
1
[ "bug" ]
[]
2026-03-05T06:40:39Z
2026-04-12T08:13:59Z
2026-04-12T08:13:58Z
NONE
null
20260413T085906Z
2026-04-13T08:59:06Z
fish-kong
54,362,165
MDQ6VXNlcjU0MzYyMTY1
User
false
huggingface/transformers
4,026,426,872
I_kwDOCUB6oc7v_mX4
44,458
https://github.com/huggingface/transformers/issues/44458
https://api.github.com/repos/huggingface/transformers/issues/44458
Mllama compile failed after new attn mask
### System Info torch 2.10.0+cpu regression PR: #42848 ### Who can help? @vasqu ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (giv...
open
null
false
4
[ "bug" ]
[]
2026-03-05T07:33:29Z
2026-04-12T08:13:57Z
null
CONTRIBUTOR
null
20260413T085906Z
2026-04-13T08:59:06Z
jiqing-feng
107,918,818
U_kgDOBm614g
User
false
huggingface/transformers
4,027,812,864
I_kwDOCUB6oc7wE4wA
44,462
https://github.com/huggingface/transformers/issues/44462
https://api.github.com/repos/huggingface/transformers/issues/44462
AutoTokenizer ignores tokenizer.json from the repository
### System Info - `transformers` version: 5.3.0 - Python version: 3.10.12 - Huggingface_hub version: 1.5.0 ### Who can help? @ArthurZucker and @itazap ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GL...
closed
completed
false
3
[ "bug" ]
[]
2026-03-05T12:04:31Z
2026-03-20T12:37:12Z
2026-03-20T12:37:12Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
apaniukov
51,917,466
MDQ6VXNlcjUxOTE3NDY2
User
false
huggingface/transformers
4,028,028,902
I_kwDOCUB6oc7wFtfm
44,464
https://github.com/huggingface/transformers/issues/44464
https://api.github.com/repos/huggingface/transformers/issues/44464
Chunked generation produces inconsistent outputs when using compiled forward
### System Info - `transformers` version: 4.57.6 - Platform: Linux-6.18.13-arch1-1-x86_64-with-glibc2.43 - Python version: 3.12.12 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.7.0 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelera...
closed
completed
false
2
[ "bug" ]
[]
2026-03-05T12:50:02Z
2026-03-06T16:50:02Z
2026-03-06T16:50:02Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
mrTsjolder
3,738,082
MDQ6VXNlcjM3MzgwODI=
User
false
huggingface/transformers
4,028,229,153
I_kwDOCUB6oc7wGeYh
44,466
https://github.com/huggingface/transformers/issues/44466
https://api.github.com/repos/huggingface/transformers/issues/44466
[v5] Inconsistent serialization of `lm_head.weight` (tied weights?) depending on model device in v5/`main`, while v4.57 behaves correctly
### System Info ``` - `transformers` version: 5.3.0.dev0 - Platform: Linux-6.8.0-100-generic-x86_64-with-glibc2.39 - Python version: 3.12.12 - Huggingface_hub version: 1.5.0 - Safetensors version: 0.7.0 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version ...
closed
completed
false
3
[ "bug" ]
[]
2026-03-05T13:26:59Z
2026-03-09T15:00:24Z
2026-03-09T15:00:24Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
fxmarty-amd
180,171,742
U_kgDOCr0z3g
User
false
huggingface/transformers
4,029,975,044
I_kwDOCUB6oc7wNIoE
44,479
https://github.com/huggingface/transformers/issues/44479
https://api.github.com/repos/huggingface/transformers/issues/44479
[`bug`] v5.3.0 video input regression for `qwen2_5_vl`, `qwen3_vl`, `qwen3_5`, and `qwen3_5_moe`
### System Info - `transformers` version: 5.3.0.dev0 - Platform: Windows-10-10.0.26200-SP0 - Python version: 3.11.6 - Huggingface_hub version: 1.5.0 - Safetensors version: 0.6.2 - Accelerate version: 1.13.0.dev0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.10...
closed
completed
false
3
[ "bug" ]
[]
2026-03-05T18:47:53Z
2026-03-10T09:57:36Z
2026-03-10T09:57:35Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
tomaarsen
37,621,491
MDQ6VXNlcjM3NjIxNDkx
User
false
huggingface/transformers
4,031,866,225
I_kwDOCUB6oc7wUWVx
44,483
https://github.com/huggingface/transformers/issues/44483
https://api.github.com/repos/huggingface/transformers/issues/44483
[Critical v5.3] /v1/chat/completions would not accept request as usual
### System Info Linux ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run `transfor...
closed
completed
false
4
[ "bug" ]
[ "LysandreJik" ]
2026-03-06T02:39:54Z
2026-03-16T05:26:19Z
2026-03-16T05:26:18Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
zhangwei217245
346,451
MDQ6VXNlcjM0NjQ1MQ==
User
false
huggingface/transformers
4,031,894,714
I_kwDOCUB6oc7wUdS6
44,484
https://github.com/huggingface/transformers/issues/44484
https://api.github.com/repos/huggingface/transformers/issues/44484
Why max_shard_size in PreTrainedModel.save_pretrained() is 50GB?
### System Info In old version like 4.57.1, max_shard_size in PreTrainedModel.save_pretrained() is 5GB, but in new version, max_shard_size is '50GB'. Is it normal? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially suppor...
closed
completed
false
1
[ "bug" ]
[]
2026-03-06T02:51:48Z
2026-03-06T13:34:27Z
2026-03-06T13:34:27Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Silencezong
92,312,888
U_kgDOBYCVOA
User
false
huggingface/transformers
4,032,430,401
I_kwDOCUB6oc7wWgFB
44,485
https://github.com/huggingface/transformers/issues/44485
https://api.github.com/repos/huggingface/transformers/issues/44485
[Bug/Discusion] GLM-5 RoPE Implementation
### System Info https://huggingface.co/zai-org/GLM-5/blob/main/config.json#L45 hi! I see that the rope setting here is rope_interleave true. However, looking at the transformers implementation, the logic here is false, refer to these https://github.com/huggingface/transformers/blob/d5e555a632682555332c3c8e938461efd...
closed
completed
false
3
[ "bug" ]
[]
2026-03-06T06:04:18Z
2026-04-13T08:37:44Z
2026-04-13T08:37:44Z
CONTRIBUTOR
null
20260413T085906Z
2026-04-13T08:59:06Z
Jintao-Huang
45,290,347
MDQ6VXNlcjQ1MjkwMzQ3
User
false
huggingface/transformers
4,032,643,658
I_kwDOCUB6oc7wXUJK
44,486
https://github.com/huggingface/transformers/issues/44486
https://api.github.com/repos/huggingface/transformers/issues/44486
KubeflowCallback: Native progress reporting for Kubernetes-based Kubeflow training
### Feature request Add a `KubeflowCallback` to enable automatic progress and metrics reporting for training jobs running on [Kubeflow Trainer](https://github.com/kubeflow/trainer), the Kubernetes-native platform for distributed AI/ML training. **Context:** This is part of a coordinated effort with the Kubeflow comm...
closed
completed
false
1
[ "Feature request" ]
[]
2026-03-06T07:07:19Z
2026-03-18T14:58:25Z
2026-03-18T14:58:25Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
abhijeet-dhumal
84,722,973
MDQ6VXNlcjg0NzIyOTcz
User
false
huggingface/transformers
4,032,973,273
I_kwDOCUB6oc7wYknZ
44,488
https://github.com/huggingface/transformers/issues/44488
https://api.github.com/repos/huggingface/transformers/issues/44488
Current version also does not load "cjvt/sleng-bert"
### System Info broken config: Python 3.13.5 tokenizers 0.22.2 transformers 5.2.0 torch 2.7.1+cu118 working config: Python 3.13.5 tokenizers 0.22.1 transformers 4.57.1 torch 2.8.0+cu129 ### Who can help? @ArthurZucker @Cyrilvallez ### Information - [x] The official example scripts - [ ] My own m...
closed
completed
false
14
[ "bug" ]
[]
2026-03-06T08:36:44Z
2026-03-24T10:05:50Z
2026-03-23T08:52:03Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
AngledLuffa
3,411,033
MDQ6VXNlcjM0MTEwMzM=
User
false
huggingface/transformers
4,033,842,729
I_kwDOCUB6oc7wb44p
44,492
https://github.com/huggingface/transformers/issues/44492
https://api.github.com/repos/huggingface/transformers/issues/44492
Typo in Cache strategies
### System Info I’m sorry if I am writing this issue in a wrong subsection but your chatbot let me here. The document Cache Strategies, https://huggingface.co/docs/transformers/kv_cache , says that the JIT maximizes latency while it should “minimizes” the latency . I basically just greaduated, start of my career so p...
closed
completed
false
1
[ "bug" ]
[]
2026-03-06T12:00:54Z
2026-03-06T20:05:38Z
2026-03-06T20:05:38Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
aryan221102
160,477,158
U_kgDOCZCv5g
User
false
huggingface/transformers
4,033,865,199
I_kwDOCUB6oc7wb-Xv
44,493
https://github.com/huggingface/transformers/issues/44493
https://api.github.com/repos/huggingface/transformers/issues/44493
Many models started showing UNEXPECTED Key with position id
### System Info - `transformers` version: 5.0.0 - Platform: Linux-6.6.113+-x86_64-with-glibc2.35 - Python version: 3.12.12 - Huggingface_hub version: 1.5.0 - Safetensors version: 0.7.0 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2....
closed
completed
false
2
[ "bug" ]
[]
2026-03-06T12:06:44Z
2026-03-18T18:30:50Z
2026-03-18T18:30:50Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
weathon
41,298,844
MDQ6VXNlcjQxMjk4ODQ0
User
false
huggingface/transformers
4,034,746,383
I_kwDOCUB6oc7wfVgP
44,496
https://github.com/huggingface/transformers/issues/44496
https://api.github.com/repos/huggingface/transformers/issues/44496
ValueError: Unrecognized model in allenai/Olmo-Hybrid-Instruct-SFT-7B. Should have a `model_type` key in its config.json.
### System Info - `transformers` version: 5.3.0.dev0 (main) - Platform: macOS-15.7.4-arm64-arm-64bit-Mach-O - Python version: 3.13.2 - Huggingface_hub version: 1.3.1 - Safetensors version: 0.5.3 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (acceler...
closed
completed
false
4
[ "bug" ]
[]
2026-03-06T15:17:24Z
2026-03-08T23:07:07Z
2026-03-08T23:07:06Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
xenova
26,504,141
MDQ6VXNlcjI2NTA0MTQx
User
false
huggingface/transformers
4,036,647,258
I_kwDOCUB6oc7wmlla
44,509
https://github.com/huggingface/transformers/issues/44509
https://api.github.com/repos/huggingface/transformers/issues/44509
Docs still mention text2text-generation / summarization / translation pipeline tasks which were removed in v5
### System Info Transformers version: 5.x The issue is related to the documentation rather than a specific runtime environment. ### Who can help? @Rocketknight1 @stevhliu ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `exampl...
closed
completed
false
4
[ "bug" ]
[]
2026-03-06T23:34:44Z
2026-03-09T19:00:15Z
2026-03-09T19:00:15Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
math-hiyoko
56,009,584
MDQ6VXNlcjU2MDA5NTg0
User
false
huggingface/transformers
4,038,808,878
I_kwDOCUB6oc7wu1Uu
44,512
https://github.com/huggingface/transformers/issues/44512
https://api.github.com/repos/huggingface/transformers/issues/44512
Docs still mention transformers run command which was removed in v5
### System Info Transformers version: 5.x The issue is related to the documentation rather than a specific runtime environment. ### Who can help? @stevhliu ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (suc...
closed
completed
false
0
[ "bug" ]
[]
2026-03-07T16:10:16Z
2026-03-09T15:37:17Z
2026-03-09T15:37:17Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
math-hiyoko
56,009,584
MDQ6VXNlcjU2MDA5NTg0
User
false
huggingface/transformers
4,038,940,374
I_kwDOCUB6oc7wvVbW
44,514
https://github.com/huggingface/transformers/issues/44514
https://api.github.com/repos/huggingface/transformers/issues/44514
`Qwen2_5_VLProcessor.apply_chat_template` crashes on batched input when `padding=False`
### System Info transformers 5.3.0 ### Who can help? @zucchini-nlp ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ...
closed
completed
false
1
[ "bug" ]
[]
2026-03-07T17:03:20Z
2026-03-25T11:33:47Z
2026-03-25T11:33:47Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
qgallouedec
45,557,362
MDQ6VXNlcjQ1NTU3MzYy
User
false
huggingface/transformers
4,040,512,605
I_kwDOCUB6oc7w1VRd
44,521
https://github.com/huggingface/transformers/issues/44521
https://api.github.com/repos/huggingface/transformers/issues/44521
apply_chat_template returns all-zero assistant_masks for multimodal inputs
### System Info transformers==5.3.0 ### Who can help? @ArthurZucker and @itazap ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### R...
open
null
false
8
[ "bug" ]
[]
2026-03-08T05:03:12Z
2026-04-18T08:11:39Z
null
NONE
null
20260418T090534Z
2026-04-18T09:05:34Z
renhouxing
36,196,749
MDQ6VXNlcjM2MTk2NzQ5
User
false
huggingface/transformers
4,041,937,165
I_kwDOCUB6oc7w6xEN
44,530
https://github.com/huggingface/transformers/issues/44530
https://api.github.com/repos/huggingface/transformers/issues/44530
[CB] PagedAttentionCache crashes with "Invalid group type: linear_attention" on Qwen3.5 models
### System Info - `transformers` version: 5.2.0 - Platform: Windows-11-10.0.26200-SP0 - Python version: 3.13.3 - Huggingface_hub version: 1.5.0 - Safetensors version: 0.5.3 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.10.0+cu128 (...
closed
completed
false
7
[ "bug", "Code agent slop" ]
[]
2026-03-08T18:49:55Z
2026-03-13T11:58:36Z
2026-03-10T22:42:08Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
mxchampagne
96,957,776
U_kgDOBcd1UA
User
false
huggingface/transformers
4,043,153,614
I_kwDOCUB6oc7w_aDO
44,534
https://github.com/huggingface/transformers/issues/44534
https://api.github.com/repos/huggingface/transformers/issues/44534
Transformers v5 fills non-persistent buffers with junk
### System Info - `transformers` version: 5.3.0 - Platform: Linux-6.17.0-14-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 1.6.0 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accel...
closed
completed
false
2
[ "bug" ]
[]
2026-03-09T03:43:59Z
2026-03-09T12:56:33Z
2026-03-09T12:56:33Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
umarbutler
8,473,183
MDQ6VXNlcjg0NzMxODM=
User
false
huggingface/transformers
4,044,272,162
I_kwDOCUB6oc7xDrIi
44,537
https://github.com/huggingface/transformers/issues/44537
https://api.github.com/repos/huggingface/transformers/issues/44537
About check_model_inputs after version 5.2.0, Where is it!?
Hei bros, I found the func transformers.utils.generic.check_model_inputs in version <=5.1.0 turn to transformers.utils.generic.merge_with_config_defaults in version >=5.2.0, but I did not found any notes about it, or I am a blind person......TAT
closed
completed
false
2
[]
[]
2026-03-09T08:56:37Z
2026-03-10T02:15:57Z
2026-03-10T02:15:57Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
QYQTexas
169,536,866
U_kgDOChrtYg
User
false
huggingface/transformers
4,044,495,369
I_kwDOCUB6oc7xEhoJ
44,541
https://github.com/huggingface/transformers/issues/44541
https://api.github.com/repos/huggingface/transformers/issues/44541
Can not deploy SFT Qwen3.5-9B model
### System Info I SFT Qwen3.5-9B model with transformers==5.2.0 ### Reproduction When I try to deploy my model with vllm==0.17.0, it reports: TypeError: Invalid type of HuggingFace config. Expected type: <class 'vllm.transformers_utils.configs.qwen3_5.Qwen3_5Config'>, but found type: <class 'transformers.models.qwe...
closed
completed
false
17
[ "bug" ]
[]
2026-03-09T09:39:19Z
2026-04-19T01:15:17Z
2026-04-10T09:27:56Z
NONE
null
20260419T020535Z
2026-04-19T02:05:35Z
zouYC2021
66,997,535
MDQ6VXNlcjY2OTk3NTM1
User
false
huggingface/transformers
4,045,354,786
I_kwDOCUB6oc7xHzci
44,545
https://github.com/huggingface/transformers/issues/44545
https://api.github.com/repos/huggingface/transformers/issues/44545
Qwen2_5_VLProcessor.apply_chat_template crashes on batched input when padding=False
## Bug Description `Qwen2_5_VLProcessor.apply_chat_template` raises `ValueError: setting an array element with a sequence` when processing a batch of ≥2 conversations that include images, under the default `padding=False` setting. **Root cause:** `mm_token_type_ids` was built by calling `np.array(text_inputs["input_i...
closed
completed
false
2
[]
[]
2026-03-09T12:37:17Z
2026-03-25T11:33:46Z
2026-03-25T11:33:46Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Anakintano
126,348,043
U_kgDOB4frCw
User
false
huggingface/transformers
4,048,763,527
I_kwDOCUB6oc7xUzqH
44,554
https://github.com/huggingface/transformers/issues/44554
https://api.github.com/repos/huggingface/transformers/issues/44554
[MPS] Upstream correctness issue in attention when value head dim differs from query
### System Info There is a correctness issue with attention in PyTorch when using the MPS backend with value head dims different from the query head (see https://github.com/pytorch/pytorch/issues/176767). Consider the following reproducer ```python import torch import torch.nn.functional as F q = torch.rand(1, 1, 8...
open
reopened
false
10
[ "WIP", "bug" ]
[]
2026-03-10T01:13:32Z
2026-04-17T11:56:26Z
null
CONTRIBUTOR
null
20260417T180542Z
2026-04-17T18:05:42Z
hvaara
1,535,968
MDQ6VXNlcjE1MzU5Njg=
User
false
huggingface/transformers
4,049,397,875
I_kwDOCUB6oc7xXOhz
44,556
https://github.com/huggingface/transformers/issues/44556
https://api.github.com/repos/huggingface/transformers/issues/44556
Loading checkpoint trained on v4.57 cannot be reload after upgraded to v5.2 & v5.3
### System Info I have trained models using Qwen3 using v4.57 but the ckpt loading will hang forever after Loading weights: 100%|██████████| 708/708 [00:09<00:00, 78.04it/s] . ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An offic...
closed
completed
false
2
[ "bug" ]
[]
2026-03-10T04:56:49Z
2026-03-11T20:27:04Z
2026-03-11T20:27:04Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
ihungalexhsu
8,895,697
MDQ6VXNlcjg4OTU2OTc=
User
false
huggingface/transformers
4,050,043,835
I_kwDOCUB6oc7xZsO7
44,559
https://github.com/huggingface/transformers/issues/44559
https://api.github.com/repos/huggingface/transformers/issues/44559
flash-attn-4 (flash_attn.cute) is not supported by attn_implementation="flash_attention_2"
### Feature request # Support `flash-attn-4` (`flash_attn.cute`) in Transformers attention backend selection ## System Info - `transformers==5.3.0` - `torch==2.10.0+cu128` - `flash-attn-4==4.0.0b4` - `accelerate==1.13.0` - `trl==0.29.0` - `peft==0.18.0` - `deepspeed==0.18.7` - `tokenizers==0.22.2` - `huggingface_hub=...
closed
completed
false
2
[ "Feature request" ]
[]
2026-03-10T07:47:09Z
2026-03-11T06:53:38Z
2026-03-11T06:53:38Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
DimensionSTP
65,501,090
MDQ6VXNlcjY1NTAxMDkw
User
false
huggingface/transformers
4,050,236,929
I_kwDOCUB6oc7xabYB
44,560
https://github.com/huggingface/transformers/issues/44560
https://api.github.com/repos/huggingface/transformers/issues/44560
Qwen3-vl-embedding Video Error "StopIteration" in transformers 5.3.0
### System Info transformers version: 5.3.0 Platform: Windows11, WSL2, uv, vscode Python 3.12.13 (main, Mar 3 2026, 14:59:34) [Clang 21.1.4 ] on linux ### Who can help? @zucchini-nlp ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in...
closed
completed
false
3
[ "bug" ]
[]
2026-03-10T08:30:00Z
2026-03-10T10:15:25Z
2026-03-10T09:57:21Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
QYQTexas
169,536,866
U_kgDOChrtYg
User
false
huggingface/transformers
4,050,489,633
I_kwDOCUB6oc7xbZEh
44,561
https://github.com/huggingface/transformers/issues/44561
https://api.github.com/repos/huggingface/transformers/issues/44561
Removal of `is_torch_fx_available` in v5.0 breaks `trust_remote_code` models
### System Info - `transformers` version: 5.0.0 - PyTorch version: 2.10.0 - Python version: 3.12 ### Who can help? @ArthurZucker @Rocketknight1 ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder - [x] My own task ...
closed
not_planned
false
1
[]
[]
2026-03-10T09:22:21Z
2026-03-25T17:48:04Z
2026-03-25T17:48:04Z
NONE
null
20260325T200011Z
2026-03-25T20:00:11Z
janbernloehr
1,050,099
MDQ6VXNlcjEwNTAwOTk=
User
false
huggingface/transformers
4,051,220,122
I_kwDOCUB6oc7xeLaa
44,568
https://github.com/huggingface/transformers/issues/44568
https://api.github.com/repos/huggingface/transformers/issues/44568
[BUG] add_special_tokens=True doesn't add BOS/EOS tokens for microsoft/mdeberta-v3-base tokenizer in transformers >=5.0
### System Info ## Version Details - Working version: transformers==4.48.0 - Broken versions: transformers==5.0.0, 5.1.0, 5.2.0, 5.3.0 ## Environment - transformers: 5.2.0 - tokenizers: 0.22.2 - Python: 3.12 - Platform: Linux ### Who can help? @ArthurZucker and @itazap ### Information - [ ] The official example sc...
closed
completed
false
0
[ "bug" ]
[]
2026-03-10T11:43:59Z
2026-03-24T09:40:46Z
2026-03-24T09:40:46Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Abdullahaml1
25,988,048
MDQ6VXNlcjI1OTg4MDQ4
User
false
huggingface/transformers
4,052,196,712
I_kwDOCUB6oc7xh51o
44,572
https://github.com/huggingface/transformers/issues/44572
https://api.github.com/repos/huggingface/transformers/issues/44572
<spam>
<spam>
closed
completed
false
0
[]
[]
2026-03-10T14:28:30Z
2026-03-11T13:20:33Z
2026-03-11T13:20:19Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
aiqing20230305-bot
257,078,371
U_kgDOD1K0Yw
User
false
huggingface/transformers
4,052,252,831
I_kwDOCUB6oc7xiHif
44,573
https://github.com/huggingface/transformers/issues/44573
https://api.github.com/repos/huggingface/transformers/issues/44573
<spam>
<spam>
closed
completed
false
0
[]
[]
2026-03-10T14:37:42Z
2026-03-11T13:20:49Z
2026-03-11T13:20:37Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
aiqing20230305-bot
257,078,371
U_kgDOD1K0Yw
User
false
huggingface/transformers
4,052,267,788
I_kwDOCUB6oc7xiLMM
44,574
https://github.com/huggingface/transformers/issues/44574
https://api.github.com/repos/huggingface/transformers/issues/44574
Title: Clarification and consistency of "Transformers" terminology in README
<img width="1258" height="366" alt="Image" src="https://github.com/user-attachments/assets/b71a8eca-da64-4464-a28e-452dc21ca569" /> <img width="1434" height="399" alt="Image" src="https://github.com/user-attachments/assets/f324f04f-abbc-4bec-92a9-9ffde2f42034" /> While reading the **README as a new user**, I found th...
closed
completed
false
2
[]
[]
2026-03-10T14:40:08Z
2026-03-12T04:38:08Z
2026-03-11T13:24:42Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
rabbierabbie
161,629,097
U_kgDOCaJDqQ
User
false
huggingface/transformers
4,055,271,864
I_kwDOCUB6oc7xtom4
44,589
https://github.com/huggingface/transformers/issues/44589
https://api.github.com/repos/huggingface/transformers/issues/44589
TypeError: couldn't find storage object Float8_e4m3fnStorage
### System Info - `transformers` version: 5.3.0.dev0 - Platform: Linux-6.8.0-101-generic-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.6.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (acc...
closed
completed
false
6
[ "bug" ]
[]
2026-03-11T02:16:53Z
2026-03-20T13:12:32Z
2026-03-20T13:12:32Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
xin3he
83,260,933
MDQ6VXNlcjgzMjYwOTMz
User
false
huggingface/transformers
4,057,458,915
I_kwDOCUB6oc7x1-jj
44,593
https://github.com/huggingface/transformers/issues/44593
https://api.github.com/repos/huggingface/transformers/issues/44593
Support for sequence-level custom metrics with decoder-only models
### Feature request Hi Hugging Face team, I’m trying to compute custom metrics at the sequence level for a decoder-only Transformer model, but I ran into an issue. The Seq2SeqTrainer class provides the predict_with_generate option, but it is primarily designed for encoder-decoder architectures. As a result, using it ...
open
null
false
6
[ "Feature request" ]
[]
2026-03-11T10:51:47Z
2026-03-13T11:57:16Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
l-k-11235
57,141,057
MDQ6VXNlcjU3MTQxMDU3
User
false
huggingface/transformers
4,060,145,187
I_kwDOCUB6oc7yAOYj
44,609
https://github.com/huggingface/transformers/issues/44609
https://api.github.com/repos/huggingface/transformers/issues/44609
StaticSlidingWindowLayer triggers torch.compile/dynamo recompilations every decode step
### Feature request ### Summary Could there be added support for StaticSlidingWindowLayer to be fully compatible with torch.compile() by avoiding it's dynamic control flow? When using a StaticCache with StaticSlidingWindowLayer (e.g. GPT-OSS, Mistral) and torch.compile(), the compiled graph recompiles on every deco...
closed
completed
false
5
[ "Feature request" ]
[]
2026-03-11T19:15:20Z
2026-03-12T13:03:20Z
2026-03-12T13:01:42Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jazpurTT
200,671,331
U_kgDOC_YAYw
User
false
huggingface/transformers
4,060,354,689
I_kwDOCUB6oc7yBBiB
44,610
https://github.com/huggingface/transformers/issues/44610
https://api.github.com/repos/huggingface/transformers/issues/44610
[BUG] OmDet-Turbo processor produces 640px inputs but the model expects 224px
### System Info * `transformers` version: `5.0.0.dev0` * Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39` * Python version: `3.12.3` * `huggingface_hub` version: `1.3.2` * `safetensors` version: `0.7.0` * `accelerate` version: `1.12.0` * Accelerate config: `not installed` * DeepSpeed version:...
closed
completed
false
0
[ "bug" ]
[]
2026-03-11T19:58:13Z
2026-04-18T09:07:17Z
2026-03-13T11:55:55Z
CONTRIBUTOR
null
20260418T100536Z
2026-04-18T10:05:36Z
harshaljanjani
75,426,551
MDQ6VXNlcjc1NDI2NTUx
User
false
huggingface/transformers
4,061,867,605
I_kwDOCUB6oc7yGy5V
44,617
https://github.com/huggingface/transformers/issues/44617
https://api.github.com/repos/huggingface/transformers/issues/44617
Sam3Video: CUDA out of memory
### System Info transformers 5.3.0 Python 3.10.12 torch 2.4.0+cu124 Tracking multiple targets simultaneously, typically numbering in the dozens, results in out of memory. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An official...
open
null
false
3
[ "bug" ]
[]
2026-03-12T03:29:05Z
2026-03-27T10:37:08Z
null
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
middleknight
39,006,812
MDQ6VXNlcjM5MDA2ODEy
User
false
huggingface/transformers
4,062,402,313
I_kwDOCUB6oc7yI1cJ
44,619
https://github.com/huggingface/transformers/issues/44619
https://api.github.com/repos/huggingface/transformers/issues/44619
Plug model rule in development flow and extend it
This is a follow-up from https://github.com/huggingface/transformers/pull/44174 We're now plugging the tool into developer flow: - add an opt-in github hook for checking the model - automatically run `make check-model-rules` on PRs to generate reports - add a ref to the CLI in the AI agents files - fix models that ar...
open
null
false
1
[]
[ "tarekziade" ]
2026-03-12T06:00:58Z
2026-04-14T08:29:54Z
null
MEMBER
null
20260414T122001Z
2026-04-14T12:20:01Z
tarekziade
250,019
MDQ6VXNlcjI1MDAxOQ==
User
false
huggingface/transformers
4,063,277,021
I_kwDOCUB6oc7yMK_d
44,623
https://github.com/huggingface/transformers/issues/44623
https://api.github.com/repos/huggingface/transformers/issues/44623
[BUG] processor.save_pretrained(...) missing files
### System Info transformers 4.57.6 <img width="688" height="176" alt="Image" src="https://github.com/user-attachments/assets/7106b5ce-6ef8-4bb6-ba7a-889821d02f8f" /> transformers 5.3.0 <img width="664" height="101" alt="Image" src="https://github.com/user-attachments/assets/b54d5186-280b-405e-b0bb-4caee99f2a11" /...
closed
completed
false
2
[ "bug" ]
[]
2026-03-12T09:20:10Z
2026-03-13T12:04:23Z
2026-03-13T12:04:23Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
Jintao-Huang
45,290,347
MDQ6VXNlcjQ1MjkwMzQ3
User
false
huggingface/transformers
4,063,886,834
I_kwDOCUB6oc7yOf3y
44,625
https://github.com/huggingface/transformers/issues/44625
https://api.github.com/repos/huggingface/transformers/issues/44625
Qwen3.5 `num_labels` not propagated from core config to text config
### System Info - `transformers` version: 5.3.0.dev0 - Platform: Windows-10-10.0.26200-SP0 - Python version: 3.11.6 - Huggingface_hub version: 1.6.0 - Safetensors version: 0.6.2 - Accelerate version: 1.13.0.dev0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.10...
open
null
false
3
[ "bug" ]
[]
2026-03-12T11:09:38Z
2026-04-08T12:22:10Z
null
MEMBER
null
20260411T144729Z
2026-04-11T14:47:29Z
tomaarsen
37,621,491
MDQ6VXNlcjM3NjIxNDkx
User
false
huggingface/transformers
4,065,597,475
I_kwDOCUB6oc7yVBgj
44,637
https://github.com/huggingface/transformers/issues/44637
https://api.github.com/repos/huggingface/transformers/issues/44637
load_best_model_at_end reloads PEFT adapter weights onto CUDA and can OOM under low remaining GPU memory
## System Info - `transformers` version: local current checkout (5.3.0.dev0) - Python: `3.12.12` - PyTorch: `2.10.0+cu128` - CUDA available: `True` - CUDA device count: `8` - `torchvision`: `0.25.0+cu128` - `Pillow`: `12.1.1` - PEFT: `0.18.1` I can also provide the full `transformers env` output if needed. ## Who ca...
open
null
false
7
[]
[]
2026-03-12T15:47:16Z
2026-04-02T14:14:48Z
null
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
DogWala
111,959,103
U_kgDOBqxcPw
User
false
huggingface/transformers
4,067,983,074
I_kwDOCUB6oc7yeH7i
44,643
https://github.com/huggingface/transformers/issues/44643
https://api.github.com/repos/huggingface/transformers/issues/44643
Qwen3.5 + `flash_attention_2` crashes: 3D M-RoPE position_ids leak to `_is_packed_sequence`
## Qwen3.5 + `flash_attention_2` crashes: 3D M-RoPE position_ids leak to `_is_packed_sequence` ### System Info - `transformers`: 5.3.0, PyTorch: 2.6.0+cu124, flash-attn: 2.8.3, Python: 3.10, Linux ### Reproduction Fine-tuning `Qwen3.5-9B` with `attn_implementation="flash_attention_2"` crashes with `CUDA error: an i...
closed
completed
false
2
[ "bug" ]
[]
2026-03-13T00:11:25Z
2026-03-13T11:21:05Z
2026-03-13T11:21:05Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
ritwickchaudhry
11,583,361
MDQ6VXNlcjExNTgzMzYx
User
false
huggingface/transformers
4,069,534,970
I_kwDOCUB6oc7ykCz6
44,655
https://github.com/huggingface/transformers/issues/44655
https://api.github.com/repos/huggingface/transformers/issues/44655
Unable to save Pipeline objects with save_pretrained.
### System Info **transformers**: 5.3.0, **python**: 3.13.12, **OS**: macOS 15.6 ### Who can help? @Rocketknight1 ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or ...
closed
completed
false
4
[ "bug" ]
[]
2026-03-13T07:53:25Z
2026-03-13T14:08:01Z
2026-03-13T14:08:01Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
VishnuHaasan
66,861,385
MDQ6VXNlcjY2ODYxMzg1
User
false
huggingface/transformers
4,070,986,225
I_kwDOCUB6oc7yplHx
44,661
https://github.com/huggingface/transformers/issues/44661
https://api.github.com/repos/huggingface/transformers/issues/44661
`add-new-model-like` fails if model is inside the `TOKENIZER_MAPPING_NAMES`
### System Info - `transformers` version: 5.3.0.dev0 - Platform: Linux-6.14.0-1013-nvidia-aarch64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 1.6.0 - Safetensors version: 0.7.0 - Accelerate version: 1.14.0.dev0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version...
closed
completed
false
7
[ "bug" ]
[]
2026-03-13T13:00:26Z
2026-03-13T17:13:29Z
2026-03-13T17:13:29Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
michalrzak
66,417,283
MDQ6VXNlcjY2NDE3Mjgz
User
false
huggingface/transformers
4,071,819,790
I_kwDOCUB6oc7yswoO
44,671
https://github.com/huggingface/transformers/issues/44671
https://api.github.com/repos/huggingface/transformers/issues/44671
CamemBERT produces incorrect masked LM predictions in v5
### System Info - `transformers` version: 5.3.0 - Platform: Linux-6.6.113+-x86_64-with-glibc2.35 - Python version: 3.12.12 - Huggingface_hub version: 1.6.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.10...
closed
completed
false
5
[ "bug" ]
[]
2026-03-13T15:30:48Z
2026-03-23T10:47:50Z
2026-03-23T10:47:50Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
math-hiyoko
56,009,584
MDQ6VXNlcjU2MDA5NTg0
User
false
huggingface/transformers
4,072,494,799
I_kwDOCUB6oc7yvVbP
44,677
https://github.com/huggingface/transformers/issues/44677
https://api.github.com/repos/huggingface/transformers/issues/44677
Add base_model_tp_plan to OlmoeConfig
## Description `OlmoeConfig` is missing a `base_model_tp_plan` class attribute, which means `from_pretrained(tp_plan="auto")` does not work for OLMoE models. Other MoE models like Qwen3-MoE already have this. OLMoE needs its own plan with a key difference: `q_norm` and `k_norm` must use `"colwise"` (not `"replicated_...
closed
completed
false
0
[]
[]
2026-03-13T17:37:56Z
2026-03-26T13:58:59Z
2026-03-26T13:58:59Z
MEMBER
null
20260407T090028Z
2026-04-07T09:00:28Z
dacorvo
1,910,518
MDQ6VXNlcjE5MTA1MTg=
User
false
huggingface/transformers
4,072,495,180
I_kwDOCUB6oc7yvVhM
44,678
https://github.com/huggingface/transformers/issues/44678
https://api.github.com/repos/huggingface/transformers/issues/44678
Use index_select instead of fancy indexing in batched_mm_experts_forward
## Description `batched_mm_experts_forward` in `transformers/integrations/moe.py` uses fancy indexing (`self.gate_up_proj[expert_ids]`) to select expert weights and biases. While semantically correct, fancy indexing is ambiguous for some compiler backends — it could be interpreted as gather, advanced indexing, or slic...
closed
completed
false
1
[]
[]
2026-03-13T17:38:02Z
2026-03-19T12:13:33Z
2026-03-19T12:13:33Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
dacorvo
1,910,518
MDQ6VXNlcjE5MTA1MTg=
User
false
huggingface/transformers
4,072,495,877
I_kwDOCUB6oc7yvVsF
44,679
https://github.com/huggingface/transformers/issues/44679
https://api.github.com/repos/huggingface/transformers/issues/44679
Allow kernel modules to declare their preferred mask function
## Description `load_and_register_attn_kernel` in `transformers/integrations/hub_kernels.py` hardcodes `flash_attention_2` as the mask function for all custom attention kernels: ```python ALL_MASK_ATTENTION_FUNCTIONS.register(attn_implementation, ALL_MASK_ATTENTION_FUNCTIONS["flash_attention_2"]) ``` This means cust...
open
null
false
1
[]
[]
2026-03-13T17:38:08Z
2026-04-13T08:37:38Z
null
MEMBER
null
20260413T085906Z
2026-04-13T08:59:06Z
dacorvo
1,910,518
MDQ6VXNlcjE5MTA1MTg=
User
false
huggingface/transformers
4,073,182,351
I_kwDOCUB6oc7yx9SP
44,683
https://github.com/huggingface/transformers/issues/44683
https://api.github.com/repos/huggingface/transformers/issues/44683
Compiled flex_attention fails on torch >= 2.9
### System Info All recent transformers versions -- impacts torch >= 2.9 ### Who can help? @vasqu @ArthurZucker @Cyrilvallez ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My ow...
closed
completed
false
0
[ "bug" ]
[]
2026-03-13T20:09:58Z
2026-03-18T11:44:20Z
2026-03-18T11:44:20Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
ntenenz
8,411,908
MDQ6VXNlcjg0MTE5MDg=
User
false
huggingface/transformers
4,075,806,271
I_kwDOCUB6oc7y794_
44,701
https://github.com/huggingface/transformers/issues/44701
https://api.github.com/repos/huggingface/transformers/issues/44701
Example: Handling imbalanced text classification with F1-score evaluation using Trainer API
Many real-world NLP classification tasks have imbalanced label distributions. However, most example scripts in the repository evaluate models primarily using accuracy. Accuracy can be misleading for imbalanced datasets, and metrics such as F1-score or balanced accuracy are often more appropriate. I would like to cont...
closed
completed
false
1
[ "Code agent slop" ]
[]
2026-03-14T13:16:42Z
2026-03-18T16:12:21Z
2026-03-18T12:59:40Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
MdSaifAli786123
136,231,572
U_kgDOCB66lA
User
false
huggingface/transformers
4,075,957,274
I_kwDOCUB6oc7y8iwa
44,704
https://github.com/huggingface/transformers/issues/44704
https://api.github.com/repos/huggingface/transformers/issues/44704
AutoProcessor.from_pretrained not passing all kwargs to cached_file
Hi, I believe `AutoProcessor.from_pretrained` is not forwarding arguments correctly to `cached_file`. The `cached_file` function is [defined with `**kwargs`](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/hub.py#L225). However, `AutoProcessor.from_pretrained` filters the provided kwargs...
closed
completed
false
3
[ "bug" ]
[]
2026-03-14T14:46:06Z
2026-03-25T18:04:24Z
2026-03-25T18:04:24Z
NONE
null
20260325T200011Z
2026-03-25T20:00:11Z
peacefulotter
32,218,033
MDQ6VXNlcjMyMjE4MDMz
User
false
huggingface/transformers
4,076,931,510
I_kwDOCUB6oc7zAQm2
44,716
https://github.com/huggingface/transformers/issues/44716
https://api.github.com/repos/huggingface/transformers/issues/44716
`PixioPatchEmbeddings.forward` supports `interpolate_pos_encoding` but it is not propagated through `PixioEmbeddings`/`PixioModel`
The `PixioPatchEmbeddings` module defines an `interpolate_pos_encoding` argument in its forward method, but this argument is never propagated from higher-level modules. Current call chain: `PixioPatchEmbeddings` → `PixioEmbeddings` → `PixioModel` Since `PixioEmbeddings` and `PixioModel` do not pass or expose the `int...
open
null
false
1
[]
[]
2026-03-14T22:41:54Z
2026-03-27T02:44:21Z
null
CONTRIBUTOR
null
20260407T090028Z
2026-04-07T09:00:28Z
audioXD
8,224,015
MDQ6VXNlcjgyMjQwMTU=
User
false
huggingface/transformers
4,077,031,787
I_kwDOCUB6oc7zApFr
44,717
https://github.com/huggingface/transformers/issues/44717
https://api.github.com/repos/huggingface/transformers/issues/44717
Support packed sequences for linear attention models (i.e. Qwen3.5)
### Feature request Currently, packing does not seem supported for text-based datasets (https://github.com/unslothai/unsloth/issues/4160). It would be good to support this. ### Motivation Without packing, my training runs are approximately 3-5x more expensive with the dataset that I'd like to use, and also suffer fr...
open
null
false
14
[ "Feature request" ]
[]
2026-03-14T23:22:19Z
2026-03-26T21:04:34Z
null
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
kirawi
67,773,714
MDQ6VXNlcjY3NzczNzE0
User
false
huggingface/transformers
4,080,096,496
I_kwDOCUB6oc7zMVTw
44,734
https://github.com/huggingface/transformers/issues/44734
https://api.github.com/repos/huggingface/transformers/issues/44734
`transformers serve`: /v1/responses crashes on KV cache continuation due to wrong tensor indexing
### System Info - transformers version: 5.3.0 (latest main) - Platform: Linux / macOS - Python version: 3.12 ### Who can help? @LysandreJik @Rocketknight1 ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such a...
closed
completed
false
1
[]
[]
2026-03-16T04:08:09Z
2026-03-16T15:28:01Z
2026-03-16T15:28:01Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
mango766
85,663,565
MDQ6VXNlcjg1NjYzNTY1
User
false
huggingface/transformers
4,080,523,655
I_kwDOCUB6oc7zN9mH
44,737
https://github.com/huggingface/transformers/issues/44737
https://api.github.com/repos/huggingface/transformers/issues/44737
XLNet: relative_positional_encoding computes on CPU every forward pass (missing device= in torch.arange)
### System Info - `transformers` version: confirmed on 4.30.2, still present on `main` as of 2026-03-16 - Platform: Linux (tested on NVIDIA/AMD GPUs) - PyTorch: 2.x - Python: 3.10+ ### Who can help? @ArthurZucker @Rocketknight1 ### Information - [x] The official example scripts - [x] My own modified scripts ### T...
closed
completed
false
3
[ "bug" ]
[]
2026-03-16T06:30:52Z
2026-03-19T13:30:49Z
2026-03-19T13:30:49Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
mvstrauss
38,297,631
MDQ6VXNlcjM4Mjk3NjMx
User
false
huggingface/transformers
4,080,877,033
I_kwDOCUB6oc7zPT3p
44,740
https://github.com/huggingface/transformers/issues/44740
https://api.github.com/repos/huggingface/transformers/issues/44740
Run model linter on new models additions
- detect PRs with new models - add a `new model` label - run the linter and when failing comment the PR with errors
open
null
false
2
[]
[ "tarekziade" ]
2026-03-16T08:04:16Z
2026-04-17T08:29:51Z
null
MEMBER
null
20260417T180542Z
2026-04-17T18:05:42Z
tarekziade
250,019
MDQ6VXNlcjI1MDAxOQ==
User
false
huggingface/transformers
4,081,037,683
I_kwDOCUB6oc7zP7Fz
44,741
https://github.com/huggingface/transformers/issues/44741
https://api.github.com/repos/huggingface/transformers/issues/44741
[Neuron] Improve transformers compatibility with AWS Neuron devices
## Context AWS Neuron devices (Trainium/Inferentia) compile a separate **NEFF** (Neuron Executable File Format) for every unique tensor shape. Any code path that changes tensor shapes between iterations — growing masks, `torch.cat` on outputs, variable padding — triggers expensive recompilations (2–60s per NEFF depend...
closed
completed
false
2
[]
[]
2026-03-16T08:38:23Z
2026-04-17T14:02:28Z
2026-04-17T14:02:28Z
MEMBER
null
20260417T180542Z
2026-04-17T18:05:42Z
dacorvo
1,910,518
MDQ6VXNlcjE5MTA1MTg=
User
false
huggingface/transformers
4,081,060,880
I_kwDOCUB6oc7zQAwQ
44,742
https://github.com/huggingface/transformers/issues/44742
https://api.github.com/repos/huggingface/transformers/issues/44742
[Neuron] Static-shape generation loop for compilation-friendly inference
## Context `GenerationMixin._sample` grows `input_ids`, `attention_mask`, and `position_ids` via `torch.cat` on every decode step. This is problematic for any backend where dynamic tensor shapes carry a cost: * **XLA/torch.compile backends:** Static shapes are required for graph caching — dynamic shapes cause retraci...
open
null
false
3
[]
[]
2026-03-16T08:43:33Z
2026-04-17T13:47:38Z
null
MEMBER
null
20260417T180542Z
2026-04-17T18:05:42Z
dacorvo
1,910,518
MDQ6VXNlcjE5MTA1MTg=
User
false
huggingface/transformers
4,081,078,375
I_kwDOCUB6oc7zQFBn
44,743
https://github.com/huggingface/transformers/issues/44743
https://api.github.com/repos/huggingface/transformers/issues/44743
transformers modular_qwen3_5.py: Recurrent states always reset when using cache and seq_len>1
### System Info - `transformers` version: 5.3.0 - Platform: Linux-5.15.0-171-generic-x86_64-with-glibc2.35 - Python version: 3.12.11 - Huggingface_hub version: 1.6.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (acceler...
closed
completed
false
2
[ "bug" ]
[]
2026-03-16T08:47:36Z
2026-03-17T06:34:36Z
2026-03-16T09:46:10Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
yansongzhou
237,394,923
U_kgDODiZb6w
User
false
huggingface/transformers
4,081,249,005
I_kwDOCUB6oc7zQurt
44,744
https://github.com/huggingface/transformers/issues/44744
https://api.github.com/repos/huggingface/transformers/issues/44744
推理模型的缺陷
这种架构的推理模型无法解决非线性推理模型下的问题。 例如:1+2,2+1=6,推理模型不会把目标设置成如何获得6缺失了什么条件,而是基于训练数据在疯狂地做小学数学。 第二个共性特征解析:在开发ASR,TTS的编程任务中,推理模型只会质疑代码文件历史中的潜在问题,并不会察觉到数据流中隐藏的线索是反向演绎法,和诊断问题的最短路径是:文本A→TTS→Wav→ASR→文本A。 如果推理模型没有实质性地研发进展,那我们只是再次造了一个用于搜索向量数据库的Google.甚至都没有Google提供的情报有参考性。 此架构下的数据库收录了大多数人类意识资产,却没有真正还原人类推理模型的一半能力。
closed
completed
false
0
[]
[]
2026-03-16T09:22:56Z
2026-03-18T15:11:18Z
2026-03-18T15:11:18Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
QuickerStudio
26,364,832
MDQ6VXNlcjI2MzY0ODMy
User
false
huggingface/transformers
4,081,508,778
I_kwDOCUB6oc7zRuGq
44,746
https://github.com/huggingface/transformers/issues/44746
https://api.github.com/repos/huggingface/transformers/issues/44746
最理想的Transformes二代生态环境的设想,100B的推理引擎模型会成为未来标准。
定律: 1.针对任务类型而优化的不到100B的“闭源”推理模型采用二进制加密部署到客户本地,后来以客户为中心形成的云计算生态将大幅度平摊算力和能源局限性。(增量更新型的大模型才是未来标准,虽然现在看起来很荒谬,但是人类意识就是在不停的增量更新,直到事实合理) 2.公共开放式地增量向量数据库收录系统。(每人抱着有各种缺陷的向量数据库,不如动用所有人的力量去维护一个完美的向量数据库) 3.小而精准的本地资料库负责搜索,管理的“开源”模型。(简单推理需求本地化) 4.基于开源合作下的功能研发类型的软件生态环境诞生的商业经济。(企业研发的试错场) 进入这个时代的人们向量数据库共享,知识共享,大量的人才会有更多的精力去校准推理能力和应...
closed
completed
false
1
[]
[]
2026-03-16T10:08:38Z
2026-03-18T15:15:22Z
2026-03-18T15:15:22Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
QuickerStudio
26,364,832
MDQ6VXNlcjI2MzY0ODMy
User
false
huggingface/transformers
4,081,656,083
I_kwDOCUB6oc7zSSET
44,748
https://github.com/huggingface/transformers/issues/44748
https://api.github.com/repos/huggingface/transformers/issues/44748
[Neuron] Auto-select StaticCache when device is Neuron
## Context On Neuron devices, `StaticCache` is required for correct generation — dynamic tensor shapes trigger per-step recompilations. Currently, users must explicitly pass `past_key_values=StaticCache(...)` or `cache_implementation="static"` to `model.generate()`. Ideally, `model.generate()` should auto-select `Stat...
open
null
false
3
[]
[]
2026-03-16T10:38:15Z
2026-04-17T14:05:50Z
null
MEMBER
null
20260417T180542Z
2026-04-17T18:05:42Z
dacorvo
1,910,518
MDQ6VXNlcjE5MTA1MTg=
User
false
huggingface/transformers
4,081,994,049
I_kwDOCUB6oc7zTklB
44,749
https://github.com/huggingface/transformers/issues/44749
https://api.github.com/repos/huggingface/transformers/issues/44749
Transformer 从4.57.3 升级到5.3.0 后过滤数据时长变慢十倍以上
### System Info H20 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction 过滤函数实现:对应链接 ht...
open
null
false
9
[ "bug" ]
[]
2026-03-16T11:48:25Z
2026-04-18T08:11:32Z
null
NONE
null
20260418T090534Z
2026-04-18T09:05:34Z
chenjiaoAngel
38,650,344
MDQ6VXNlcjM4NjUwMzQ0
User
false
huggingface/transformers
4,082,686,570
I_kwDOCUB6oc7zWNpq
44,754
https://github.com/huggingface/transformers/issues/44754
https://api.github.com/repos/huggingface/transformers/issues/44754
I really need help regarding to meta device issue
I installed TRELLIS 2 and getting the error for meta device, I followed the instructions here for installing this release of transformers, but when I launch TRELLIS 2 I still get the same error. Can someone please help me Fix this step by step because I spend 5 days trying without any solution
open
null
false
2
[]
[]
2026-03-16T14:03:28Z
2026-04-17T08:29:48Z
null
NONE
null
20260417T180542Z
2026-04-17T18:05:42Z
Hany138078
230,026,885
U_kgDODbXuhQ
User
false
huggingface/transformers
4,082,832,596
I_kwDOCUB6oc7zWxTU
44,756
https://github.com/huggingface/transformers/issues/44756
https://api.github.com/repos/huggingface/transformers/issues/44756
Disable mmap on Strix Halo to avoid OOM
### System Info - `transformers` version: 5.3.0 - Platform: Linux-6.19.0-9-generic-x86_64-with-glibc2.42 - Python version: 3.13.12 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?...
open
null
false
2
[ "bug" ]
[]
2026-03-16T14:28:04Z
2026-04-11T00:52:38Z
null
CONTRIBUTOR
null
20260411T144729Z
2026-04-11T14:47:29Z
woct0rdho
23,053,399
MDQ6VXNlcjIzMDUzMzk5
User
false
huggingface/transformers
4,085,509,339
I_kwDOCUB6oc7zg-zb
44,779
https://github.com/huggingface/transformers/issues/44779
https://api.github.com/repos/huggingface/transformers/issues/44779
Deepseek tokenizer produces incorrect results as of v5 (works in v4)
### System Info - `transformers` version: 5.3.0 - Platform: Linux-6.6.113+-x86_64-with-glibc2.35 - Python version: 3.12.12 - Huggingface_hub version: 1.6.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.10...
closed
completed
false
4
[ "bug" ]
[]
2026-03-17T00:30:56Z
2026-03-18T21:35:14Z
2026-03-18T21:35:14Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
xenova
26,504,141
MDQ6VXNlcjI2NTA0MTQx
User
false
huggingface/transformers
4,087,676,990
I_kwDOCUB6oc7zpQA-
44,792
https://github.com/huggingface/transformers/issues/44792
https://api.github.com/repos/huggingface/transformers/issues/44792
Failed test case `test_model_generate_images` for janus model
### System Info - `transformers` version: 5.3.0.dev0 - Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.5.3 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch ve...
closed
completed
false
3
[ "bug" ]
[]
2026-03-17T10:40:18Z
2026-04-17T08:32:33Z
2026-04-17T08:32:33Z
CONTRIBUTOR
null
20260417T180542Z
2026-04-17T18:05:42Z
kaixuanliu
13,268,042
MDQ6VXNlcjEzMjY4MDQy
User
false
huggingface/transformers
4,092,031,687
I_kwDOCUB6oc7z53LH
44,805
https://github.com/huggingface/transformers/issues/44805
https://api.github.com/repos/huggingface/transformers/issues/44805
IndexError: The shape of the mask [...] at index 0 does not match the shape of the indexed tensor [...] at index 0
### System Info ``` - `transformers` version: 5.3.0 - Platform: Linux-5.15.0-164-generic-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: 0.16.4 - PyTorch version (accelerat...
closed
completed
false
2
[ "bug" ]
[]
2026-03-18T00:58:28Z
2026-03-18T12:20:33Z
2026-03-18T12:20:33Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
akowalsk
49,454,006
MDQ6VXNlcjQ5NDU0MDA2
User
false
huggingface/transformers
4,093,210,823
I_kwDOCUB6oc7z-XDH
44,810
https://github.com/huggingface/transformers/issues/44810
https://api.github.com/repos/huggingface/transformers/issues/44810
Showcase / question: a board-proven offline language runtime on ESP32-C3, and whether some future language capability may move beyond general model definitions
Hi Transformers folks, I wanted to share ...
closed
completed
false
2
[]
[]
2026-03-18T07:09:16Z
2026-03-18T16:01:42Z
2026-03-18T15:47:59Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Alpha-Guardian
203,005,953
U_kgDODBmgAQ
User
false
huggingface/transformers
4,093,576,102
I_kwDOCUB6oc7z_wOm
44,811
https://github.com/huggingface/transformers/issues/44811
https://api.github.com/repos/huggingface/transformers/issues/44811
Whisper processor.batch_decode() function ignoring skip_special_tokens params
### System Info - `transformers` version: 4.57.6 - Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.36.2 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorc...
closed
completed
false
5
[ "bug" ]
[]
2026-03-18T08:30:17Z
2026-03-20T10:59:59Z
2026-03-20T10:59:59Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
cfasana
143,723,410
U_kgDOCJELkg
User
false
huggingface/transformers
4,094,388,173
I_kwDOCUB6oc70C2fN
44,821
https://github.com/huggingface/transformers/issues/44821
https://api.github.com/repos/huggingface/transformers/issues/44821
Unable to load `AutoImageProcessor` from URL
### System Info <details><summary>Versions</summary> <p> - `transformers` version: 5.3.0 - Platform: Linux-6.8.0-106-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed versi...
closed
completed
false
6
[ "bug" ]
[]
2026-03-18T11:08:09Z
2026-03-23T13:13:38Z
2026-03-23T13:10:59Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
BSchilperoort
12,114,825
MDQ6VXNlcjEyMTE0ODI1
User
false
huggingface/transformers
4,095,309,295
I_kwDOCUB6oc70GXXv
44,829
https://github.com/huggingface/transformers/issues/44829
https://api.github.com/repos/huggingface/transformers/issues/44829
AutoModelForSequenceClassification with attn_implementation="flash_attention_3" causes degenerate training (loss increases, model predicts all-one-class)
### System Info When fine-tuning `Qwen3ForSequenceClassification` (loaded via `AutoModelForSequenceClassification`) with `attn_implementation="flash_attention_3"`, training completely fails: loss increases instead of decreasing, and the model collapses to predicting all examples as one class. Removing `attn_implementa...
open
null
false
2
[ "bug" ]
[]
2026-03-18T13:56:33Z
2026-04-18T08:11:30Z
null
NONE
null
20260418T090534Z
2026-04-18T09:05:34Z
Jantory
36,835,418
MDQ6VXNlcjM2ODM1NDE4
User
false
huggingface/transformers
621,683,208
MDU6SXNzdWU2MjE2ODMyMDg=
4,483
https://github.com/huggingface/transformers/issues/4483
https://api.github.com/repos/huggingface/transformers/issues/4483
Trying to add support for GPT2 as decoder in EncoderDecoder model
# 🚀 Feature request Hi, I am trying to add the option of using GPT2 as the decoder in the EncoderDecoder model, which only support ## Motivation For a generation problem, it usually better to use GPT2 as the decoder, over BERT. ## Your contribution I've made the following changes in `modeling_gpt2.py...
closed
completed
false
32
[ "Core: Encoder-Decoder", "Good First Issue" ]
[ "patrickvonplaten" ]
2020-05-20T11:24:44Z
2026-03-02T13:42:22Z
2026-03-02T13:42:22Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
dimi1357
22,443,447
MDQ6VXNlcjIyNDQzNDQ3
User
false
huggingface/transformers
4,098,706,305
I_kwDOCUB6oc70TUuB
44,840
https://github.com/huggingface/transformers/issues/44840
https://api.github.com/repos/huggingface/transformers/issues/44840
typo in code block in docs
typo in code block: https://github.com/huggingface/transformers/blob/16a5b0936dcaae6efbc03ab1a4fd98dc324bfb9e/docs/source/en/weightconverter.md?plain=1#L69
closed
completed
false
0
[]
[]
2026-03-19T01:46:22Z
2026-03-19T11:56:52Z
2026-03-19T11:56:52Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
zhulinchng
24,189,730
MDQ6VXNlcjI0MTg5NzMw
User
false
huggingface/transformers
4,098,836,913
I_kwDOCUB6oc70T0mx
44,841
https://github.com/huggingface/transformers/issues/44841
https://api.github.com/repos/huggingface/transformers/issues/44841
Processor fails for mistralai/Voxtral-Mini-3B-2507
### System Info I am trying to run inference using `mistralai/Voxtral-Mini-3B-2507` on an audio (`np.ndarray`). On loading the processor using `processor = transformers.AutoProcessor.from_pretrained(MODEL, trust_remote_code=True)`, I am getting the following error: ``` /usr/local/lib/python3.12/dist-packages/huggingfa...
closed
completed
false
3
[ "bug" ]
[]
2026-03-19T02:26:39Z
2026-03-24T10:07:36Z
2026-03-24T10:07:36Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
BhavyaShah1234
62,424,200
MDQ6VXNlcjYyNDI0MjAw
User
false
huggingface/transformers
4,099,433,723
I_kwDOCUB6oc70WGT7
44,843
https://github.com/huggingface/transformers/issues/44843
https://api.github.com/repos/huggingface/transformers/issues/44843
AutoTokenizer.from_pretrained calls model_info() unconditionally in _patch_mistral_regex, breaks HF_HUB_OFFLINE mode
### System Info - `transformers` version: 4.57.3 - `huggingface_hub` version: 0.36.2 - Python: 3.12 - OS: Linux (Ubuntu 24.04, inside NVIDIA container) ### Who can help? @ArthurZucker @itazap ### Regression introduced in PR #42389 (`[Mistral Tokenizers] Fix tokenizer detection`), included in v4.57.2 → v4.57.3. ##...
open
null
false
5
[]
[]
2026-03-19T05:36:56Z
2026-03-27T10:58:49Z
null
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
nv-yna
248,773,860
U_kgDODtP85A
User
false
huggingface/transformers
4,100,120,851
I_kwDOCUB6oc70YuET
44,849
https://github.com/huggingface/transformers/issues/44849
https://api.github.com/repos/huggingface/transformers/issues/44849
Transformers Qwen3.5 had a bug when set output_hidden_states=True
### System Info Version: 5.2.0 in qwen3.5 outputs = model_wrapper.generate(**inputs, output_hidden_states=True) outpus something like this: ``` ><|image_pad|><|image_pad|><|image_pad|><|image_pad|><|image_pad|><|image_pad|><|image_pad|><|image_pad|><|image_pad|><|image_pad|><|image_pad|><|image_pad|><|image_pad|><...
open
null
false
5
[ "bug" ]
[]
2026-03-19T08:27:35Z
2026-04-19T08:16:52Z
null
NONE
null
20260419T090534Z
2026-04-19T09:05:34Z
lucasjinreal
21,303,438
MDQ6VXNlcjIxMzAzNDM4
User
false
huggingface/transformers
4,100,959,949
I_kwDOCUB6oc70b67N
44,855
https://github.com/huggingface/transformers/issues/44855
https://api.github.com/repos/huggingface/transformers/issues/44855
IndentationError when importing DebertaV2Model on Python 3.13 - @torch.jit.script fails to parse function with comment between decorator and def
## Description Importing `DebertaV2Model` from `transformers` (or any library that depends on it, such as `gliner`) raises an `IndentationError` on Python 3.13. The error originates in `torch.jit.script` when it attempts to re-parse the source of a JIT-scripted function that has a comment placed between the `@torch.ji...
closed
completed
false
6
[ "bug" ]
[]
2026-03-19T11:07:31Z
2026-03-26T07:49:53Z
2026-03-25T13:39:56Z
NONE
null
20260326T102019Z
2026-03-26T10:20:19Z
MNIKIEMA
64,019,294
MDQ6VXNlcjY0MDE5Mjk0
User
false
huggingface/transformers
4,101,508,603
I_kwDOCUB6oc70eA37
44,857
https://github.com/huggingface/transformers/issues/44857
https://api.github.com/repos/huggingface/transformers/issues/44857
LwDetrImageLoss crashes when using float16 AMP and Cuda
### System Info - `transformers` version: 5.3.0.dev0 - Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorc...
closed
completed
false
3
[ "bug" ]
[]
2026-03-19T12:56:22Z
2026-03-24T17:02:34Z
2026-03-24T17:02:34Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
m-matthias
16,415,097
MDQ6VXNlcjE2NDE1MDk3
User
false
huggingface/transformers
4,102,371,828
I_kwDOCUB6oc70hTn0
44,861
https://github.com/huggingface/transformers/issues/44861
https://api.github.com/repos/huggingface/transformers/issues/44861
_get_tied_weight_keys crashes with AttributeError when _tied_weights_keys is a list
### System Info - `transformers` version: 5.3.0 - Platform: Linux - Python version: 3.13 ### Who can help? @Cyrilvallez ### Information - [ ] The official example scripts - [x] My own modified scripts ### Reproduction Full finetune of `NVIDIA-Nemotron-3-Nano-4B` crashes at checkpoint save wi...
closed
completed
false
0
[]
[]
2026-03-19T15:13:12Z
2026-03-20T09:46:56Z
2026-03-20T09:46:56Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
gh-wf
111,619,017
U_kgDOBqcryQ
User
false
huggingface/transformers
4,102,382,384
I_kwDOCUB6oc70hWMw
44,863
https://github.com/huggingface/transformers/issues/44863
https://api.github.com/repos/huggingface/transformers/issues/44863
NemotronH implementation can't load NemotronH checkpoints!
### System Info ```console - `transformers` version: 5.3.0 - Platform: macOS-15.7.3-arm64-arm-64bit - Python version: 3.12.11 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerato...
open
null
false
7
[ "bug" ]
[]
2026-03-19T15:15:06Z
2026-03-29T18:49:56Z
null
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
elfprince13
2,703,145
MDQ6VXNlcjI3MDMxNDU=
User
false
huggingface/transformers
4,104,609,447
I_kwDOCUB6oc70p16n
44,868
https://github.com/huggingface/transformers/issues/44868
https://api.github.com/repos/huggingface/transformers/issues/44868
[rag-end2end-retriever] Broken Google Drive link for SQuAD dataset and hyperparameters
## Description The README for the `rag-end2end-retriever` research project contains a broken Google Drive link to the SQuAD training dataset, knowledge base, and hyperparameters used in the experiments. **Location:** https://github.com/huggingface/transformers-research-projects/tree/main/rag-end2end-retriever **B...
open
null
false
1
[]
[]
2026-03-19T22:42:17Z
2026-04-19T08:16:50Z
null
NONE
null
20260419T090534Z
2026-04-19T09:05:34Z
lmmanriquem
101,687,883
U_kgDOBg-iSw
User
false
huggingface/transformers
4,105,125,767
I_kwDOCUB6oc70rz-H
44,869
https://github.com/huggingface/transformers/issues/44869
https://api.github.com/repos/huggingface/transformers/issues/44869
Whisper word timestamp decode crashes on trailing replacement character at end of decoded token stream
### System Info ### System Info - OS: macOS - `transformers`: `5.3.0.dev0` - Model: `openai/whisper-medium.en` ### Reproduction I hit an `IndexError: string index out of range` in Whisper word-timestamp decoding and traced it to `src/transformers/models/whisper/tokenization_whisper.py`. The failing code path is in...
open
null
false
3
[ "bug" ]
[]
2026-03-20T01:25:49Z
2026-04-15T13:47:57Z
null
NONE
null
20260415T224019Z
2026-04-15T22:40:19Z
chromatic-descension
98,860,570
U_kgDOBeR-Gg
User
false
huggingface/transformers
4,105,674,824
I_kwDOCUB6oc70t6BI
44,871
https://github.com/huggingface/transformers/issues/44871
https://api.github.com/repos/huggingface/transformers/issues/44871
[Gemma-3] Inconsistent eos_token_id configuration: tokenizer has single value (1) but model.config has list [1, 106]
### System Info - `transformers` version: 5.3.0 - Platform: Windows-11-10.0.26100-SP0 - Python version: 3.12.11 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.7.1+cu1...
closed
completed
false
1
[ "bug" ]
[]
2026-03-20T04:29:57Z
2026-03-21T01:40:09Z
2026-03-21T01:40:09Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
IvanFan-Van
98,149,954
U_kgDOBdmmQg
User
false
huggingface/transformers
4,106,938,437
I_kwDOCUB6oc70yuhF
44,877
https://github.com/huggingface/transformers/issues/44877
https://api.github.com/repos/huggingface/transformers/issues/44877
Strict config prevents loading `granite_speech` config
### System Info - `transformers` version: 5.3.0.dev0 - Platform: Windows-10-10.0.26200-SP0 - Python version: 3.11.13 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.6.2 - Accelerate version: 1.11.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.9.0+cu...
closed
completed
false
4
[ "bug" ]
[]
2026-03-20T09:57:02Z
2026-03-27T09:30:18Z
2026-03-27T09:30:17Z
MEMBER
null
20260407T090028Z
2026-04-07T09:00:28Z
tomaarsen
37,621,491
MDQ6VXNlcjM3NjIxNDkx
User
false
huggingface/transformers
4,109,855,475
I_kwDOCUB6oc7092rz
44,898
https://github.com/huggingface/transformers/issues/44898
https://api.github.com/repos/huggingface/transformers/issues/44898
[BUG] Perceiver image classification (non-default res) fails even with interpolate_pos_encoding=True
### System Info * `transformers` version: `5.0.0.dev0` * Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39` * Python version: `3.12.3` * `huggingface_hub` version: `1.3.2` * `safetensors` version: `0.7.0` * `accelerate` version: `1.12.0` * Accelerate config: `not installed` * DeepSpeed version:...
closed
completed
false
0
[ "bug" ]
[]
2026-03-20T19:58:09Z
2026-04-18T09:06:59Z
2026-03-25T11:48:03Z
CONTRIBUTOR
null
20260418T100536Z
2026-04-18T10:05:36Z
harshaljanjani
75,426,551
MDQ6VXNlcjc1NDI2NTUx
User
false
huggingface/transformers
4,111,564,283
I_kwDOCUB6oc71EX37
44,906
https://github.com/huggingface/transformers/issues/44906
https://api.github.com/repos/huggingface/transformers/issues/44906
Remove unnecessary `expand_as` in `get_placeholder_mask` across VLMs
### Feature request ## Problem The `get_placeholder_mask` function (and equivalent inline patterns) across ~70 multimodal model files expands a boolean placeholder mask from shape `(B, S, 1)` to `(B, S, H)` via `.expand_as(inputs_embeds)` before passing it to `masked_scatter`. This expansion is unnecessary because `m...
open
null
false
0
[ "Feature request" ]
[]
2026-03-21T06:05:36Z
2026-03-21T06:05:36Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
syncdoth
45,599,998
MDQ6VXNlcjQ1NTk5OTk4
User
false
huggingface/transformers
4,111,661,658
I_kwDOCUB6oc71Evpa
44,908
https://github.com/huggingface/transformers/issues/44908
https://api.github.com/repos/huggingface/transformers/issues/44908
inverse_sqrt scheduler ignores lr_scheduler_kwargs (timescale not passed)
### System Info Incomplete arguments passed for schedulers where name is explicitly checked. https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/optimization.py#L664 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - ...
closed
completed
false
1
[ "bug" ]
[]
2026-03-21T06:53:50Z
2026-03-24T13:06:18Z
2026-03-24T13:06:18Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
magarwal0205
6,791,598
MDQ6VXNlcjY3OTE1OTg=
User
false
huggingface/transformers
4,112,866,232
I_kwDOCUB6oc71JVu4
44,910
https://github.com/huggingface/transformers/issues/44910
https://api.github.com/repos/huggingface/transformers/issues/44910
[Bug] Flash Attention crashes with illegal memory access on Qwen3.5 due to 3D position_ids being misinterpreted as packed sequence
### System Info # [Bug] Flash Attention crashes with `illegal memory access` on Qwen3.5 due to 3D `position_ids` being misinterpreted as packed sequence We fixed it in https://github.com/ouroborosscr/transformers/tree/fix/qwen35-flash-attn-3d-position-ids ## Description When using `attn_implementation="flash_attent...
closed
completed
false
6
[ "bug" ]
[]
2026-03-21T15:38:54Z
2026-03-25T02:03:24Z
2026-03-24T14:36:36Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
ouroborosscr
75,082,702
MDQ6VXNlcjc1MDgyNzAy
User
false
huggingface/transformers
4,113,147,717
I_kwDOCUB6oc71KadF
44,912
https://github.com/huggingface/transformers/issues/44912
https://api.github.com/repos/huggingface/transformers/issues/44912
git-oss-20b will not properly load with MXFP4 quantization and falls back to bf16
### System Info Hi, probably this is related to https://github.com/huggingface/transformers/issues/42723 I get: MXFP4 quantization requires Triton and kernels installed: CUDA requires Triton >= 3.4.0, XPU requires Triton >= 3.5.0, we will default to dequantizing the model to bf16 When executing the following code: p...
closed
completed
false
5
[ "bug" ]
[]
2026-03-21T17:38:02Z
2026-03-24T15:33:17Z
2026-03-24T15:33:17Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jottbecr
126,362,918
U_kgDOB4glJg
User
false
huggingface/transformers
4,113,212,978
I_kwDOCUB6oc71KqYy
44,913
https://github.com/huggingface/transformers/issues/44913
https://api.github.com/repos/huggingface/transformers/issues/44913
In GPTNeoXConfig, rotary_pct silently reverts to default on reload
### System Info - `transformers` version: 5.3.0 - Platform: Linux-6.17.0-19-generic-x86_64-with-glibc2.39 - Python version: 3.12.4 - Huggingface_hub version: 1.7.2 - Safetensors version: 0.4.5 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accel...
closed
completed
false
3
[ "bug" ]
[]
2026-03-21T17:58:32Z
2026-03-27T09:32:37Z
2026-03-27T09:19:47Z
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
ratishsp
3,006,607
MDQ6VXNlcjMwMDY2MDc=
User
false
huggingface/transformers
4,114,204,538
I_kwDOCUB6oc71Ocd6
44,918
https://github.com/huggingface/transformers/issues/44918
https://api.github.com/repos/huggingface/transformers/issues/44918
Unpacking Qwen3.5 input embeddings fails with trl SFT trainer
### System Info My environment: ``` datasets==4.6.1 faiss-cpu==1.13.2 numpy==2.4.2 pyserini==1.5.0 sentence-transformers==5.2.3 torch==2.10.0 torchvision==0.25.0 tqdm==4.67.3 trl==0.29.1 wandb==0.25.1 ``` ### Who can help? @zucchini-nlp ### Information - [ ] The official example scripts - [x] My own modified scri...
closed
completed
false
3
[ "bug" ]
[]
2026-03-21T23:43:15Z
2026-03-23T15:20:11Z
2026-03-23T15:20:11Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
JakobJBauer
53,357,351
MDQ6VXNlcjUzMzU3MzUx
User
false
huggingface/transformers
4,116,497,241
I_kwDOCUB6oc71XMNZ
44,928
https://github.com/huggingface/transformers/issues/44928
https://api.github.com/repos/huggingface/transformers/issues/44928
[Bug] Catastrophic gradient explosion (NaN) in RLHF with Qwen3.5 due to 3D position_ids forcing SDPA Math fallback and BF16 collapse
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 5.3.0 - Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35 - Python version: 3.11.15 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerat...
open
null
false
3
[ "bug" ]
[]
2026-03-22T16:46:05Z
2026-03-25T13:51:26Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
ouroborosscr
75,082,702
MDQ6VXNlcjc1MDgyNzAy
User
false
huggingface/transformers
4,116,586,484
I_kwDOCUB6oc71Xh_0
44,929
https://github.com/huggingface/transformers/issues/44929
https://api.github.com/repos/huggingface/transformers/issues/44929
First-class fine-tuning support for Mamba / Mamba-2 SSMs — architecture is production-ready, but the training path in Transformers isn't
### Feature request You can load Mamba models in Transformers — but the moment you try to actually fine-tune one, things fall apart fast. The standard Trainer was built around attention + KV cache assumptions that SSMs simply don't share. Gradient checkpointing breaks in weird ways, DataCollatorForLanguageModeling doe...
open
null
false
5
[ "Feature request" ]
[]
2026-03-22T17:23:58Z
2026-04-02T14:59:27Z
null
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
lochanharishwar
221,997,305
U_kgDODTto-Q
User
false
huggingface/transformers
4,116,661,565
I_kwDOCUB6oc71X0U9
44,933
https://github.com/huggingface/transformers/issues/44933
https://api.github.com/repos/huggingface/transformers/issues/44933
Nonexistant import from image_utils
### System Info I was getting the following error when running the latest version of main `ImportError: cannot import name 'PILImageResampling' from 'transformers.image_utils' (/Users/josh/Documents/sandbox/.hugging/lib/python3.12/site-packages/transformers/image_utils.py)` I found where it's being imported in src/t...
open
null
false
8
[ "bug" ]
[]
2026-03-22T17:57:33Z
2026-03-23T13:47:57Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
josh-kean
40,722,578
MDQ6VXNlcjQwNzIyNTc4
User
false
huggingface/transformers
4,116,794,040
I_kwDOCUB6oc71YUq4
44,936
https://github.com/huggingface/transformers/issues/44936
https://api.github.com/repos/huggingface/transformers/issues/44936
trainer.evaluate() fails after trainer.train()
### System Info - `transformers` version: 5.3.0 - Platform: Windows-11-10.0.26200-SP0 - Python version: 3.13.0 - Huggingface_hub version: 1.7.2 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.10.0+cpu (NA...
closed
completed
false
1
[ "bug" ]
[]
2026-03-22T18:48:49Z
2026-04-13T13:47:53Z
2026-04-13T13:47:53Z
NONE
null
20260414T122001Z
2026-04-14T12:20:01Z
HenrikEilers
48,092,550
MDQ6VXNlcjQ4MDkyNTUw
User
false
huggingface/transformers
4,117,554,107
I_kwDOCUB6oc71bOO7
44,937
https://github.com/huggingface/transformers/issues/44937
https://api.github.com/repos/huggingface/transformers/issues/44937
Check out "Google LLC"
<spam>
closed
completed
false
0
[]
[]
2026-03-23T00:54:12Z
2026-03-23T14:10:26Z
2026-03-23T14:10:26Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
mija4264-arch38
251,540,492
U_kgDODv40DA
User
false
huggingface/transformers
4,118,575,281
I_kwDOCUB6oc71fHix
44,938
https://github.com/huggingface/transformers/issues/44938
https://api.github.com/repos/huggingface/transformers/issues/44938
transformers in Python 3.14 failed to load
### System Info Python 3.14 ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction transfo...
closed
completed
false
3
[ "bug" ]
[]
2026-03-23T06:27:08Z
2026-03-24T13:10:15Z
2026-03-24T12:29:27Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
resc863
50,689,611
MDQ6VXNlcjUwNjg5NjEx
User
false
huggingface/transformers
4,120,161,600
I_kwDOCUB6oc71lK1A
44,945
https://github.com/huggingface/transformers/issues/44945
https://api.github.com/repos/huggingface/transformers/issues/44945
Incorrect LLM output when using pipeline parallelism
### System Info transformers==4.57.1 Python==3.12.12 Kaggle env ### Who can help? @CyrilVallez @3outeille ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset ...
open
null
false
0
[ "bug" ]
[]
2026-03-23T11:26:59Z
2026-03-23T11:26:59Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
tasinislam21
33,097,868
MDQ6VXNlcjMzMDk3ODY4
User
false
huggingface/transformers
4,122,908,189
I_kwDOCUB6oc71vpYd
44,955
https://github.com/huggingface/transformers/issues/44955
https://api.github.com/repos/huggingface/transformers/issues/44955
Orgs should be able to agree to 3rd party use agreements so their members can access 3rd party models
### Feature request for CI / CD the only way to get a sane workflow is to make a fake single user that the whole company uses to access the 3rd party resources. This undercuts your enterprise options and the idea that all users should log in as themselves. I know it's probably some legal hangup, but as is you are unde...
open
null
false
4
[ "Feature request" ]
[]
2026-03-23T18:39:58Z
2026-03-31T03:44:13Z
null
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
asglover
140,220,574
U_kgDOCFuYng
User
false