Attention Outputs
#2
by saburq
Hi, I'm trying to access the attention weights of the dinov3-vits16-pretrain-lvd1689m model using the transformers library in Python. I'm calling the model with output_attentions=True, but the outputs.attentions attribute is always None.
Could you please clarify if this model is expected to output attention weights, and if so, how can I access them from the model's output?
Thanks!
Hi,
Try setting model.config._attn_implementation = 'eager' before running the model on an input image. The default SDPA attention implementation computes attention with torch.nn.functional.scaled_dot_product_attention, which never materializes the attention weight matrix, so outputs.attentions stays None; the eager implementation computes the softmax weights explicitly and can return them.
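Here is a minimal sketch of the mechanism. Since the DINOv3 checkpoint is gated, this uses a tiny randomly-initialized ViT as a stand-in; the same eager-vs-SDPA behavior applies when you load the real checkpoint with AutoModel.from_pretrained(..., attn_implementation="eager") or set the config attribute as above:

```python
import torch
from transformers import ViTConfig, ViTModel

# Tiny randomly-initialized ViT, used as a stand-in for the gated
# DINOv3 checkpoint. The attention-output mechanism is the same.
config = ViTConfig(
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    image_size=32,
    patch_size=16,
)
# Force the eager attention implementation so attention weights
# are materialized and returned.
config._attn_implementation = "eager"
model = ViTModel(config)
model.eval()

pixel_values = torch.randn(1, 3, 32, 32)  # dummy input image
with torch.no_grad():
    outputs = model(pixel_values, output_attentions=True)

# One attention tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len); here seq_len = 4 patches + 1 CLS = 5.
print(len(outputs.attentions))
print(tuple(outputs.attentions[0].shape))
```

With SDPA left as the default, the same forward call would leave outputs.attentions as None on models like DINOv3, which is exactly the symptom you are seeing.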