To use hash-map/got_tokenizer with sentence-transformers:
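If the library is not already installed, it can typically be obtained via pip (assuming a recent Python environment; the package name below is the standard one on PyPI):

```shell
pip install -U sentence-transformers
```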
```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("hash-map/got_tokenizer")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]

# Compute embeddings and the pairwise similarity matrix
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```