## Use from the Transformers library
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ISTA-DASLab/switch-base-128_qmoe")
model = AutoModelForSeq2SeqLM.from_pretrained("ISTA-DASLab/switch-base-128_qmoe")
```
# switch-base-128_qmoe

This is the google/switch-base-128 model, quantized to ternary precision with the QMoE framework and stored in QMoE's custom compressed format.

Please see the QMoE repository for how to use this model.
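Ternary precision means each weight is restricted to one of three levels, typically a shared scale times {-1, 0, +1}, which is what makes the aggressive compression possible. The sketch below illustrates that idea with a simple magnitude-threshold heuristic; it is a hypothetical illustration only, not the actual QMoE quantization algorithm (see the QMoE repository for that).

```python
import numpy as np

def ternary_quantize(w, threshold_ratio=0.7):
    # Illustrative ternary quantizer: map each weight to {-alpha, 0, +alpha}.
    # NOT the QMoE algorithm; threshold_ratio is an assumed hyperparameter.
    delta = threshold_ratio * np.abs(w).mean()
    mask = np.abs(w) > delta                  # weights large enough to keep
    codes = np.sign(w) * mask                 # ternary codes in {-1, 0, +1}
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # shared scale
    return alpha * codes, codes

w = np.array([0.9, -0.05, 0.4, -0.8, 0.01])
quantized, codes = ternary_quantize(w)
print(codes.tolist())  # -> [1.0, 0.0, 1.0, -1.0, 0.0]
```

Storing only the 2-bit codes plus one scale per weight group, rather than 16-bit floats, is the source of the memory savings that the further-compressed QMoE format then pushes below 1 bit per parameter.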
