Mol2Pro-base

Model description

Mol2Pro-base is a ligand-conditioned protein sequence generator: an encoder-decoder (sequence-to-sequence) model that takes a small molecule as a SMILES string and produces candidate protein (amino-acid) sequences.

How to use

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_id = "AI4PD/Mol2Pro-base"
tokenizer_id = "AI4PD/Mol2Pro-tokenizer"

# Load tokenizers
tokenizer_mol = AutoTokenizer.from_pretrained(tokenizer_id, subfolder="smiles")
tokenizer_aa  = AutoTokenizer.from_pretrained(tokenizer_id, subfolder="aa")

# Load model
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
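
The snippet below is a minimal generation sketch, not an official recipe: it assumes the encoder consumes SMILES tokens (tokenizer_mol) and the decoder emits amino-acid tokens (tokenizer_aa). The example SMILES (caffeine) and the generation parameters are purely illustrative; adjust them for your use case.

# Encode an example ligand (caffeine; illustrative only, assuming SMILES in -> amino acids out)
smiles = "CN1C=NC2=C1C(=O)N(C)C(=O)N2C"
inputs = tokenizer_mol(smiles, return_tensors="pt")

# Generate a candidate protein sequence
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=512,  # assumed upper bound on sequence length
        do_sample=True,      # sampling; swap for your preferred decoding strategy
    )

# Decode the generated tokens into an amino-acid sequence
sequence = tokenizer_aa.decode(output_ids[0], skip_special_tokens=True)
print(sequence)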

Intended use

Research use only. The model generates candidate protein sequences conditioned on small-molecule inputs; it does not guarantee binding or function, and generated sequences must be validated experimentally.

Citation

If you find this work useful, please cite:

@article{VicenteSola2026Generalise,
  title   = {Generalise or Memorise? Benchmarking Ligand-Conditioned Protein Generation from Sequence-Only Data},
  author  = {Vicente-Sola, Alex and Dornfeld, Lars and Coines, Joan and Ferruz, Noelia},
  journal = {bioRxiv},
  year    = {2026},
  doi     = {10.64898/2026.02.06.704305},
}