This model is a fine-tuned version of bert-base-uncased on the GLUE MRPC dataset.

How to use brianhuster/MRPC-bert with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-classification", model="brianhuster/MRPC-bert")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("brianhuster/MRPC-bert")
model = AutoModelForSequenceClassification.from_pretrained("brianhuster/MRPC-bert")
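If you load the tokenizer and model directly, inference looks roughly like this. This is a minimal sketch, not part of the original card: the sentence pair is made up, and it assumes the usual two-label MRPC setup (LABEL_0 / LABEL_1) described below.

# Minimal sketch of pair classification with the directly loaded model
# (the sentence pair here is an illustrative example)
import torch

inputs = tokenizer(
    "The company reported strong earnings.",
    "The firm announced robust profits.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])  # e.g. LABEL_1 for a paraphrase pair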
The following hyperparameters were used during training:
Running the model with Python:
from transformers import pipeline
classifier = pipeline("text-classification", model="brianhuster/MRPC-bert")
classifier(
"Sentence 1. Sentence 2."
)
Replace "Sentence 1" and "Sentence 2" with your actual input sentence. Each sentence should end with a fullstop, even if they are questions. The model will return LABEL_1 if they are are equivalent in meaning, LABEL_1 otherwise.