
Alibaba GTE-Large-V1.5 Embeddings for MSMARCO V2.1 for TREC-RAG

This dataset contains embeddings for the MSMARCO-V2.1 dataset, which is used as the corpus for TREC RAG. All embeddings are created with GTE-Large-v1.5 and are intended to serve as a simple baseline for dense retrieval methods. Note that the embeddings are not normalized, so you will need to normalize them before use.
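Because the stored vectors are unnormalized, L2-normalize them before dot-product scoring; this makes the dot product equivalent to cosine similarity. A minimal NumPy sketch (the helper name and the toy vectors are ours, not part of the dataset):

```python
import numpy as np

def l2_normalize(vectors: np.ndarray) -> np.ndarray:
    """Scale each row to unit L2 norm so dot product equals cosine similarity."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / norms

emb = np.array([[3.0, 4.0], [1.0, 0.0]])  # toy 2-D vectors for illustration
unit = l2_normalize(emb)
# every row of `unit` now has L2 norm 1.0
```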

Retrieval Performance

Retrieval performance on TREC DL21-23, MSMARCO-v2 Dev, and the Raggy queries is shown below, with BM25 as a baseline. For both systems, retrieval is performed at the segment level, and the document score is the maximum of its passage scores (Doc Score = max(passage score)). Retrieval uses the dot product and is performed in BF16.
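The max-passage aggregation above can be sketched as follows. Segment ids in this corpus look like `msmarco_v2.1_doc_00_0#2_3101`, where the part before `#` identifies the parent document; the helper name and scores below are illustrative only:

```python
from collections import defaultdict

def max_passage_aggregate(segment_scores: dict) -> dict:
    """Collapse segment-level scores to document scores via max-pooling.

    Segment ids look like 'msmarco_v2.1_doc_00_0#2_3101'; the document id
    is the portion before '#'.
    """
    doc_scores = defaultdict(lambda: float('-inf'))
    for seg_id, score in segment_scores.items():
        doc_id = seg_id.split('#', 1)[0]
        doc_scores[doc_id] = max(doc_scores[doc_id], score)
    return dict(doc_scores)

scores = {
    'msmarco_v2.1_doc_00_0#0_0': 0.41,
    'msmarco_v2.1_doc_00_0#2_3101': 0.58,
    'msmarco_v2.1_doc_00_4810#0_10354': 0.37,
}
doc_scores = max_passage_aggregate(scores)
# -> {'msmarco_v2.1_doc_00_0': 0.58, 'msmarco_v2.1_doc_00_4810': 0.37}
```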

NDCG @ 10

| Dataset            | BM25   | GTE-Large-v1.5 |
|--------------------|--------|----------------|
| Deep Learning 2021 | 0.5778 | 0.7193         |
| Deep Learning 2022 | 0.3576 | 0.5358         |
| Deep Learning 2023 | 0.3356 | 0.4642         |
| msmarcov2-dev      | N/A    | 0.3538         |
| msmarcov2-dev2     | N/A    | 0.3470         |
| Raggy Queries      | 0.4227 | 0.5678         |
| TREC RAG (eval)    | N/A    | 0.5676         |

Recall @ 100

| Dataset            | BM25   | GTE-Large-v1.5 |
|--------------------|--------|----------------|
| Deep Learning 2021 | 0.3811 | 0.4156         |
| Deep Learning 2022 | 0.233  | 0.31173        |
| Deep Learning 2023 | 0.3049 | 0.35236        |
| msmarcov2-dev      | 0.6683 | 0.85135        |
| msmarcov2-dev2     | 0.6771 | 0.84333        |
| Raggy Queries      | 0.2807 | 0.35125        |
| TREC RAG (eval)    | N/A    | 0.25223        |

Recall @ 1000

| Dataset            | BM25   | GTE-Large-v1.5 |
|--------------------|--------|----------------|
| Deep Learning 2021 | 0.7115 | 0.73185        |
| Deep Learning 2022 | 0.479  | 0.55174        |
| Deep Learning 2023 | 0.5852 | 0.6167         |
| msmarcov2-dev      | 0.8528 | 0.93549        |
| msmarcov2-dev2     | 0.8577 | 0.93997        |
| Raggy Queries      | 0.5745 | 0.63515        |
| TREC RAG (eval)    | N/A    | 0.63133        |

Loading the dataset

Loading the document embeddings

You can either download and load the dataset like this:

```python
from datasets import load_dataset

docs = load_dataset("spacemanidol/msmarco-v2.1-gte-large-en-v1.5", split="train")
```

Or you can stream it without downloading it first:

```python
from datasets import load_dataset

docs = load_dataset("spacemanidol/msmarco-v2.1-gte-large-en-v1.5", split="train", streaming=True)
for doc in docs:
    doc_id = doc['doc_id']
    url = doc['url']
    text = doc['text']
    emb = doc['embedding']
```

Note that the full dataset corpus is ~620 GB, so it will take a while to download and may not fit on some devices.
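To see why the corpus is this large: each GTE-Large-v1.5 embedding is a 1,024-dimensional vector, so a rough back-of-envelope for float32 storage (the vector count below is a hypothetical round number, not the corpus size) is:

```python
DIM = 1024               # GTE-Large-v1.5 embedding dimensionality
BYTES_PER_FLOAT32 = 4
NUM_VECTORS = 1_000_000  # hypothetical: one million segments

total_gib = DIM * BYTES_PER_FLOAT32 * NUM_VECTORS / 2**30
# roughly 3.8 GiB per million float32 vectors, before any file-format overhead
```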

Search

A full search example (over the first 1,000 segments):

```python
from datasets import load_dataset
import torch
from transformers import AutoModel, AutoTokenizer
import numpy as np

top_k = 100
num_docs = 1_000  # search over the first 1,000 segments

docs_stream = load_dataset("spacemanidol/msmarco-v2.1-gte-large-en-v1.5", split="train", streaming=True)

docs = []
doc_embeddings = []

for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['embedding'])
    if len(docs) >= num_docs:
        break

doc_embeddings = torch.from_numpy(np.asarray(doc_embeddings, dtype=np.float32))

model = AutoModel.from_pretrained('Alibaba-NLP/gte-large-en-v1.5', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-large-en-v1.5')
model.eval()

query_prefix = ''
queries = ['how do you clean smoke off walls']
queries_with_prefix = ["{}{}".format(query_prefix, q) for q in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)

# Compute query embeddings (CLS token)
with torch.no_grad():
    query_embeddings = model(**query_tokens)[0][:, 0]

# Normalize embeddings; the stored document embeddings are NOT pre-normalized
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
doc_embeddings = torch.nn.functional.normalize(doc_embeddings, p=2, dim=1)

# Compute dot scores between the query embedding and document embeddings
dot_scores = torch.mm(query_embeddings, doc_embeddings.T)[0].numpy()
top_k_hits = np.argpartition(dot_scores, -top_k)[-top_k:].tolist()

# Sort top_k_hits by dot score, descending
top_k_hits.sort(key=lambda x: dot_scores[x], reverse=True)

# Print results
print("Query:", queries[0])
for hit in top_k_hits:
    print(docs[hit]['doc_id'])
    print(docs[hit]['text'])
    print(docs[hit]['url'], "\n")
```