Instructions for using AIWizards/MultiPRIDE-DualEncoder-LPFT-es with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use AIWizards/MultiPRIDE-DualEncoder-LPFT-es with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="AIWizards/MultiPRIDE-DualEncoder-LPFT-es")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AIWizards/MultiPRIDE-DualEncoder-LPFT-es")
model = AutoModelForSequenceClassification.from_pretrained("AIWizards/MultiPRIDE-DualEncoder-LPFT-es")
```
- Notebooks
- Google Colab
- Kaggle
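Under the hood, the `text-classification` pipeline above performs three steps: tokenize the input, run a forward pass through the model, and post-process the raw logits into label scores with a softmax. The sketch below illustrates only that post-processing step with made-up logit values; the label names `LABEL_0`/`LABEL_1` are the Transformers defaults and are an assumption here, since the model's `config.json` may define its own names.

```python
import math

def softmax(logits):
    """Convert raw classification scores to probabilities,
    mirroring the pipeline's post-processing step."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from the sequence-classification head
logits = [-1.2, 2.3]
probs = softmax(logits)

# Default label names; real names come from the model config's id2label
labels = ["LABEL_0", "LABEL_1"]
best = labels[probs.index(max(probs))]
print(best, max(probs))
```

The pipeline returns the same information as a list of `{"label": ..., "score": ...}` dictionaries, so `pipe("some Spanish text")` would yield the highest-scoring label with its probability.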