arxiv:2603.12572

LMEB: Long-horizon Memory Embedding Benchmark

Published on Mar 13 · Submitted by Xinping Zhao on Mar 16
#1 Paper of the day

Abstract

A new benchmark evaluates embedding models' ability to handle long-horizon memory retrieval tasks, revealing that performance in traditional passage retrieval does not generalize to complex memory retrieval scenarios.

AI-generated summary

Memory embeddings are crucial for memory-augmented systems such as OpenClaw, yet their evaluation is underexplored: current text embedding benchmarks focus narrowly on traditional passage retrieval and fail to assess models' ability to handle long-horizon memory retrieval tasks involving fragmented, context-dependent, and temporally distant information. To address this, we introduce the Long-horizon Memory Embedding Benchmark (LMEB), a comprehensive framework for evaluating embedding models on complex, long-horizon memory retrieval tasks. LMEB spans 22 datasets and 193 zero-shot retrieval tasks across 4 memory types (episodic, dialogue, semantic, and procedural), with both AI-generated and human-annotated data. These memory types differ in level of abstraction and temporal dependency, capturing distinct aspects of memory retrieval that reflect the diverse challenges of real-world use. We evaluate 15 widely used embedding models ranging in size from hundreds of millions to ten billion parameters. The results reveal that (1) LMEB provides a reasonable level of difficulty; (2) larger models do not always perform better; and (3) performance on LMEB is largely orthogonal to performance on MTEB. This suggests that the field has yet to converge on a universal model that excels across all memory retrieval tasks, and that performance in traditional passage retrieval may not generalize to long-horizon memory retrieval. By providing a standardized and reproducible evaluation framework, LMEB fills a crucial gap in memory embedding evaluation and drives further advances in text embedding for long-term, context-dependent memory retrieval. LMEB is available at https://github.com/KaLM-Embedding/LMEB.
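
To make the task format concrete, here is a minimal sketch of a zero-shot memory retrieval evaluation of the kind the abstract describes: embed a query and a pool of memory entries, rank by cosine similarity, and score with nDCG@10. The model name, the toy data, and the metric choice are illustrative assumptions, not LMEB's actual harness or datasets:

```python
# Hedged sketch of one zero-shot retrieval task: embed query and memory
# pool, rank by cosine similarity, score with nDCG@10. Toy data only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model under test

# Toy "episodic memory" pool; real LMEB tasks draw from 22 datasets.
memories = [
    "User mentioned on Monday that they are allergic to peanuts.",
    "User asked for a vegan restaurant near the office last week.",
    "User's favorite movie is Blade Runner.",
]
query = "What food restrictions does the user have?"
relevance = np.array([1, 1, 0])  # gold binary labels for the toy pool

# Normalized embeddings make the dot product equal cosine similarity.
mem_emb = model.encode(memories, normalize_embeddings=True)
q_emb = model.encode([query], normalize_embeddings=True)[0]
ranking = np.argsort(-(mem_emb @ q_emb))  # indices sorted by similarity, descending

def ndcg_at_k(ranked_rels: np.ndarray, k: int = 10) -> float:
    """nDCG@k for binary relevance labels given in ranked order."""
    gains = ranked_rels[:k] / np.log2(np.arange(2, len(ranked_rels[:k]) + 2))
    ideal = np.sort(ranked_rels)[::-1][:k]
    idcg = (ideal / np.log2(np.arange(2, len(ideal) + 2))).sum()
    return float(gains.sum() / idcg) if idcg > 0 else 0.0

print("nDCG@10:", ndcg_at_k(relevance[ranking]))
```

Averaging such per-task scores across the 193 tasks and 4 memory types would yield a benchmark-level comparison between models; the released harness should be treated as the authoritative protocol.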

Community

Paper author · Paper submitter

Welcome to the Long-horizon Memory Embedding Benchmark (LMEB)! Unlike existing text embedding benchmarks that narrowly focus on passage retrieval, LMEB is designed to evaluate embedding models' ability to handle complex, long-horizon memory retrieval tasks involving fragmented, context-dependent, and temporally distant information. LMEB spans 22 diverse datasets and 193 retrieval tasks across 4 memory types.

By evaluating the memory retrieval capabilities of embedding models, which are crucial for memory-augmented systems like OpenClaw, LMEB helps OpenClaw identify the most suitable embedding model, enhancing its ability to adapt, remember, and make personalized, user-aware decisions.
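
For context on where the embedding model sits in such a system, here is a hedged sketch of a recall step in a memory-augmented agent. `MemoryStore`, its methods, and the model name are hypothetical illustrations, not OpenClaw's actual API:

```python
# Hypothetical memory store for a memory-augmented agent. Every downstream
# decision (personalization, planning) depends on the embedding model
# surfacing the right memories in recall().
import numpy as np
from sentence_transformers import SentenceTransformer

class MemoryStore:
    """Append-only vector store over past interactions (illustrative)."""

    def __init__(self, model_name: str = "all-MiniLM-L6-v2"):
        self.model = SentenceTransformer(model_name)
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, text: str) -> None:
        # Embed each memory once at write time.
        self.texts.append(text)
        self.vectors.append(self.model.encode(text, normalize_embeddings=True))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Rank stored memories by cosine similarity to the query.
        q = self.model.encode(query, normalize_embeddings=True)
        sims = np.stack(self.vectors) @ q
        return [self.texts[i] for i in np.argsort(-sims)[:k]]

store = MemoryStore()
store.add("User prefers morning meetings.")
store.add("User is allergic to peanuts.")
print(store.recall("When should I schedule the call?", k=1))
```

A model that scores well on LMEB's memory types should, by the paper's argument, recall more of the right entries here than one tuned only for passage retrieval.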

Paper author · Paper submitter

The code is being prepared.

Interesting breakdown of this paper on arXivLens: https://arxivlens.com/PaperView/Details/lmeb-long-horizon-memory-embedding-benchmark-6649-f33fe845
It covers the executive summary, detailed methodology, and practical applications.
