How to use mrm8488/switch-base-16-finetuned-xsum-2 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/switch-base-16-finetuned-xsum-2")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/switch-base-16-finetuned-xsum-2")
```
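Since the checkpoint is fine-tuned on XSum (extreme summarization), the natural next step after loading is to summarize a passage with `generate()`. Here is a minimal sketch; the input article and the generation parameters (`max_new_tokens`, `num_beams`) are illustrative choices, not values prescribed by the model card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mrm8488/switch-base-16-finetuned-xsum-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example article text (made up for illustration)
article = (
    "The local council announced on Tuesday that the town's main bridge "
    "will close for six months of repairs starting in March, with traffic "
    "diverted through neighbouring villages during the works."
)

# Tokenize, truncating long inputs to the encoder's limit
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# Generate a one-sentence XSum-style summary
summary_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```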