MPNet base trained on GooAQ using Quantization-Aware Training (QAT) with an InfoNCE-style loss

This is a sentence-transformers model finetuned from microsoft/mpnet-base on the gooaq dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: microsoft/mpnet-base
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: gooaq
  • Language: en
  • License: apache-2.0

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'MPNetModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
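
The Pooling module with pooling_mode_mean_tokens=True averages the MPNet token embeddings into a single 768-dimensional sentence vector, ignoring padding. A minimal sketch of that computation in plain PyTorch (the function name is illustrative, not part of the library API):

import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Zero out padding positions, then average the remaining token embeddings
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # (batch, 768)
    counts = mask.sum(dim=1).clamp(min=1e-9)       # (batch, 1)
    return summed / counts                         # (batch, 768)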

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/mpnet-base-gooaq-qat")
# Run inference
queries = [
    "is duchenne muscular dystrophy a dominant or recessive trait?",
]
documents = [
    'Duchenne muscular dystrophy is inherited in an X-linked recessive pattern. Males have only one copy of the X chromosome from their mother and one copy of the Y chromosome from their father. If their X chromosome has a DMD gene mutation, they will have Duchenne muscular dystrophy.',
    'The dream suggests captivity and it refers to your fear of punishment. Another interpretation of this dream refers to a need to do what you feel is right in waking life. Being in jail suggests that your feelings may be trapped by a limited mind and body. ... Jail also suggests repressed feelings.',
    "An automatic transmission will downshift for you when you drive uphill. However, for moderately steep slopes, it's wise to shift to the gear range marked D2, 2, or L to ascend and descend the hill. For steep slopes that you can't ascend at a speed faster than 10 mph (about 15 kph), shift to D1 or 1.",
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.8103, 0.1611, 0.2026]])
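
Because the model was trained with a quantization-aware loss, its embeddings are intended to stay useful after int8 or binary quantization. A sketch of post-hoc quantization with quantize_embeddings from Sentence Transformers; the calibration handling shown here is illustrative (in practice a larger calibration corpus is recommended for int8):

from sentence_transformers.quantization import quantize_embeddings

# Binary: each of the 768 dimensions becomes one bit, packed into 96 int8 values
binary_doc_embeddings = quantize_embeddings(document_embeddings, precision="binary")

# int8: each dimension is mapped to an 8-bit integer using per-dimension ranges
int8_doc_embeddings = quantize_embeddings(
    document_embeddings,
    precision="int8",
    calibration_embeddings=document_embeddings,  # illustrative; use a larger corpus in practice
)
print(binary_doc_embeddings.shape, int8_doc_embeddings.shape)
# (3, 96) (3, 768)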

Evaluation

Metrics

Information Retrieval

Metric gooaq-dev-float32 gooaq-dev-int8 gooaq-dev-binary
cosine_accuracy@1 0.7419 0.7336 0.7171
cosine_accuracy@3 0.8825 0.8753 0.8612
cosine_accuracy@5 0.9237 0.919 0.907
cosine_accuracy@10 0.96 0.9569 0.9488
cosine_precision@1 0.7419 0.7336 0.7171
cosine_precision@3 0.2942 0.2918 0.2871
cosine_precision@5 0.1847 0.1838 0.1814
cosine_precision@10 0.096 0.0957 0.0949
cosine_recall@1 0.7419 0.7336 0.7171
cosine_recall@3 0.8825 0.8753 0.8612
cosine_recall@5 0.9237 0.919 0.907
cosine_recall@10 0.96 0.9569 0.9488
cosine_ndcg@10 0.8537 0.8481 0.8346
cosine_mrr@10 0.8192 0.8128 0.7978
cosine_map@100 0.8212 0.8149 0.8002
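
The three columns report the same GooAQ dev retrieval evaluation with float32, int8, and binary query/document embeddings. A hedged sketch of how a single such evaluation can be run with InformationRetrievalEvaluator; the queries, corpus, and relevance judgments below are illustrative placeholders, and the int8/binary columns correspond to repeating the evaluation on quantized embeddings:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("tomaarsen/mpnet-base-gooaq-qat")

# Illustrative placeholders: id -> text mappings plus relevance judgments
queries = {"q1": "is duchenne muscular dystrophy a dominant or recessive trait?"}
corpus = {"d1": "Duchenne muscular dystrophy is inherited in an X-linked recessive pattern. ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="gooaq-dev-float32",
)
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100, ...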

Training Details

Training Dataset

gooaq

  • Dataset: gooaq at b089f72
  • Size: 90,000 training samples
  • Columns: question and answer
  • Approximate statistics based on the first 1000 samples:
    • question: string; min: 8 tokens, mean: 11.83 tokens, max: 20 tokens
    • answer: string; min: 15 tokens, mean: 60.45 tokens, max: 180 tokens
  • Samples:
    • question: how long does halifax take to transfer mortgage funds?
      answer: Bear in mind that the speed of application will vary depending on your own personal circumstances and the lender's present day-to-day performance. In some cases, applications can be approved by the lender within 24 hours, while some can take weeks or even months.
    • question: can you get a false pregnancy test?
      answer: In very rare cases, you can have a false-positive result. This means you're not pregnant but the test says you are. You could have a false-positive result if you have blood or protein in your pee. Certain drugs, such as tranquilizers, anticonvulsants, hypnotics, and fertility drugs, could cause false-positive results.
    • question: are ahead of its time?
      answer: Definition of ahead of one's/its time : too advanced or modern to be understood or appreciated during the time when one lives or works As a director, he was ahead of his time.
  • Loss: QuantizationAwareLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "quantization_precisions": [
            "float32",
            "int8",
            "binary"
        ],
        "quantization_weights": [
            1.0,
            1.0,
            1.0
        ],
        "n_precisions_per_step": -1
    }
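
The training pairs can be reproduced from the sentence-transformers/gooaq dataset on the Hugging Face Hub (revision b089f72, as listed above). A short sketch of loading and splitting it; the split seed is an assumption, not taken from this card:

from datasets import load_dataset

# GooAQ question-answer pairs; columns are "question" and "answer"
dataset = load_dataset("sentence-transformers/gooaq", revision="b089f72", split="train")

# Carve out 90,000 training and 10,000 evaluation samples; the seed is illustrative
dataset_dict = dataset.train_test_split(test_size=10_000, seed=12)
train_dataset = dataset_dict["train"].select(range(90_000))
eval_dataset = dataset_dict["test"]
print(train_dataset, eval_dataset)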
    

Evaluation Dataset

gooaq

  • Dataset: gooaq at b089f72
  • Size: 10,000 evaluation samples
  • Columns: question and answer
  • Approximate statistics based on the first 1000 samples:
    • question: string; min: 8 tokens, mean: 11.93 tokens, max: 25 tokens
    • answer: string; min: 14 tokens, mean: 60.84 tokens, max: 127 tokens
  • Samples:
    • question: should you take ibuprofen with high blood pressure?
      answer: In general, people with high blood pressure should use acetaminophen or possibly aspirin for over-the-counter pain relief. Unless your health care provider has said it's OK, you should not use ibuprofen, ketoprofen, or naproxen sodium. If aspirin or acetaminophen doesn't help with your pain, call your doctor.
    • question: how old do you have to be to work in sc?
      answer: The general minimum age of employment for South Carolina youth is 14, although the state allows younger children who are performers to work in show business. If their families are agricultural workers, children younger than age 14 may also participate in farm labor.
    • question: how to write a topic proposal for a research paper?
      answer: ['Write down the main topic of your paper. ... ', 'Write two or three short sentences under the main topic that explain why you chose that topic. ... ', 'Write a thesis sentence that states the angle and purpose of your research paper. ... ', 'List the items you will cover in the body of the paper that support your thesis statement.']
  • Loss: QuantizationAwareLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "quantization_precisions": [
            "float32",
            "int8",
            "binary"
        ],
        "quantization_weights": [
            1.0,
            1.0,
            1.0
        ],
        "n_precisions_per_step": -1
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • bf16: True
  • batch_sampler: no_duplicates
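
Putting these hyperparameters together, the run can be reproduced roughly as follows. This is a hedged sketch: the QuantizationAwareLoss import path and constructor arguments are assumptions inferred from the parameter dump in the dataset sections, while the wrapped MultipleNegativesRankingLoss, the trainer, and the training arguments are standard Sentence Transformers APIs. train_dataset and eval_dataset refer to the splits from the dataset sketch above.

from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.losses import QuantizationAwareLoss  # import path assumed
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("microsoft/mpnet-base")

# Wrap the InfoNCE-style ranking loss so the model is also optimized for
# int8 and binary embeddings (constructor arguments assumed from the dump above)
loss = QuantizationAwareLoss(
    model,
    loss=MultipleNegativesRankingLoss(model),
    quantization_precisions=["float32", "int8", "binary"],
    quantization_weights=[1.0, 1.0, 1.0],
)

args = SentenceTransformerTrainingArguments(
    output_dir="models/mpnet-base-gooaq-qat",
    num_train_epochs=1,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()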

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: None
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss Validation Loss gooaq-dev-float32_cosine_ndcg@10 gooaq-dev-int8_cosine_ndcg@10 gooaq-dev-binary_cosine_ndcg@10
-1 -1 - - 0.2155 0.5116 0.3432
0.0007 1 8.8919 - - - -
0.0505 71 4.6028 - - - -
0.1002 141 - 0.3973 0.7842 0.7799 0.7606
0.1009 142 0.8168 - - - -
0.1514 213 0.4967 - - - -
0.2004 282 - 0.2611 0.8125 0.8082 0.7879
0.2018 284 0.4427 - - - -
0.2523 355 0.4156 - - - -
0.3006 423 - 0.2213 0.8282 0.8230 0.8047
0.3028 426 0.3245 - - - -
0.3532 497 0.3354 - - - -
0.4009 564 - 0.2026 0.8333 0.8291 0.8129
0.4037 568 0.2926 - - - -
0.4542 639 0.317 - - - -
0.5011 705 - 0.1854 0.8384 0.8340 0.8192
0.5046 710 0.2779 - - - -
0.5551 781 0.278 - - - -
0.6013 846 - 0.1768 0.8440 0.8398 0.8245
0.6055 852 0.2696 - - - -
0.6560 923 0.2752 - - - -
0.7015 987 - 0.1679 0.8504 0.8449 0.8287
0.7065 994 0.2318 - - - -
0.7569 1065 0.2398 - - - -
0.8017 1128 - 0.1621 0.8498 0.8454 0.8317
0.8074 1136 0.2274 - - - -
0.8579 1207 0.2376 - - - -
0.9019 1269 - 0.1572 0.8518 0.8464 0.8305
0.9083 1278 0.238 - - - -
0.9588 1349 0.2168 - - - -
-1 -1 - - 0.8537 0.8481 0.8346

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.091 kWh
  • Carbon Emitted: 0.024 kg of CO2
  • Hours Used: 0.293 hours
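
The figures above were collected with CodeCarbon's emissions tracker, which Sentence Transformers can run automatically during training. A minimal standalone sketch of the same measurement:

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # estimates energy draw of GPU, CPU, and RAM
tracker.start()
# ... run training here ...
emissions_kg = tracker.stop()  # estimated emissions in kg of CO2
print(f"Estimated emissions: {emissions_kg:.3f} kg CO2")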

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 5.3.0.dev0
  • Transformers: 4.57.6
  • PyTorch: 2.10.0+cu126
  • Accelerate: 1.12.0
  • Datasets: 4.3.0
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

QuantizationAwareLoss

@article{jacob2018quantization,
    title={Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference},
    author={Jacob, Benoit and Kligys, Skirmantas and Chen, Bo and Zhu, Menglong and Tang, Matthew and Howard, Andrew and Adam, Hartwig and Kalenichenko, Dmitry},
    journal={arXiv preprint arXiv:1712.05877},
    year={2018}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}