Liquid AI
Try LFM β€’ Docs β€’ LEAP β€’ Discord

LFM2.5-350M

LFM2.5 is a new family of hybrid models designed for on-device deployment. It builds on the LFM2 architecture with extended pre-training and reinforcement learning.

  • Best-in-class performance: A 350M model rivaling much larger models, bringing high-quality AI to your pocket.
  • Fast edge inference: 313 tok/s decode on an AMD CPU and 188 tok/s on Snapdragon Gen4. Runs in under 1 GB of memory, with day-one support for llama.cpp, MLX, and vLLM.
  • Scaled training: Extended pre-training from 10T to 28T tokens and large-scale multi-stage reinforcement learning.

Find more information about LFM2.5-350M in our blog post.

πŸ’» Demo: https://huggingface.co/spaces/webml-community/lfm2.5-webgpu-summarizer

πŸ—’οΈ Model Details

| Model | Parameters | Description |
|-------|------------|-------------|
| LFM2.5-350M-Base | 350M | Pre-trained base model for fine-tuning |
| LFM2.5-350M | 350M | General-purpose instruction-tuned model |

LFM2.5-350M is a general-purpose text-only model with the following features:

  • Number of parameters: 350M
  • Number of layers: 16 (10 double-gated LIV convolution blocks + 6 GQA blocks)
  • Training budget: 28T tokens
  • Context length: 32,768 tokens
  • Vocabulary size: 65,536
  • Knowledge cutoff: Mid-2024
  • Languages: English, Arabic, Chinese, French, German, Japanese, Korean, Portuguese, Spanish
  • Generation parameters:
    • temperature: 0.1
    • top_k: 50
    • repetition_penalty: 1.05
| Model | Description |
|-------|-------------|
| LFM2.5-350M | Original model checkpoint in native format. Best for fine-tuning or inference with Transformers and vLLM. |
| LFM2.5-350M-GGUF | Quantized format for llama.cpp and compatible tools. Optimized for CPU inference and local deployment with reduced memory usage. |
| LFM2.5-350M-ONNX | ONNX Runtime format for cross-platform deployment. Enables hardware-accelerated inference across diverse environments (cloud, edge, mobile). |
| LFM2.5-350M-MLX | MLX format for Apple Silicon. Optimized for fast inference on Mac devices using the MLX framework. |

We recommend using it for data extraction, structured outputs, and tool use. It is not recommended for knowledge-intensive tasks or programming.

Chat Template

LFM2.5 uses a ChatML-like format. See the Chat Template documentation for details. Example:

<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant

You can use tokenizer.apply_chat_template() to format your messages automatically.
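
For example, the prompt above can be reproduced directly with the tokenizer. This is a minimal sketch; the model ID comes from this card and everything else is standard Transformers usage:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-350M")

messages = [
    {"role": "system", "content": "You are a helpful assistant trained by Liquid AI."},
    {"role": "user", "content": "What is C. elegans?"},
]

# Render the ChatML-like prompt as a string. add_generation_prompt appends the
# opening <|im_start|>assistant header so the model starts its reply there.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)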

Tool Use

LFM2.5 supports function calling as follows:

  1. Function definition: We recommend providing the list of tools as a JSON list in the system prompt. You can also pass tools to the tokenizer.apply_chat_template() function.
  2. Function call: By default, LFM2.5 writes Pythonic function calls (a Python list between the <|tool_call_start|> and <|tool_call_end|> special tokens) as the assistant answer. You can override this behavior by asking the model to output JSON function calls in the system prompt.
  3. Function execution: The function call is executed, and the result is returned in a message with the "tool" role.
  4. Final answer: LFM2.5 interprets the outcome of the function call to answer the original user prompt in plain text.

See the Tool Use documentation for the full guide. Example:

<|startoftext|><|im_start|>system
List of tools: [{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
[{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}]<|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>
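
This flow can be scripted end to end. The sketch below is an illustration rather than the official recipe: it assumes the chat template accepts the tool schema in the same shape as the system-prompt example above, and it splits the model output on the <|tool_call_start|> / <|tool_call_end|> special tokens; get_candidate_status is the hypothetical tool from the example.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2.5-350M"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tool schema copied from the example above (hypothetical tool).
tools = [{
    "name": "get_candidate_status",
    "description": "Retrieves the current status of a candidate in the recruitment process",
    "parameters": {
        "type": "object",
        "properties": {
            "candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}
        },
        "required": ["candidate_id"],
    },
}]

messages = [{"role": "user", "content": "What is the current status of candidate ID 12345?"}]

# Step 1: the chat template injects the tool definitions into the system prompt.
input_ids = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Step 2: the model emits a Pythonic call between the tool-call special tokens.
output = model.generate(input_ids, max_new_tokens=128)
reply = tokenizer.decode(output[0][input_ids.shape[-1]:])
if "<|tool_call_start|>" in reply:
    call = reply.split("<|tool_call_start|>")[1].split("<|tool_call_end|>")[0]
    print("Tool call:", call)  # e.g. [get_candidate_status(candidate_id="12345")]

# Steps 3-4: execute the call, append the result as a {"role": "tool", ...} message,
# and call generate() again to obtain the final plain-text answer.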

πŸƒ Inference

LFM2.5 is supported by many inference frameworks. See the Inference documentation for the full list.

| Name | Description | Docs | Notebook |
|------|-------------|------|----------|
| Transformers | Simple inference with direct access to model internals. | Link | Colab link |
| vLLM | High-throughput production deployments with GPU. | Link | Colab link |
| llama.cpp | Cross-platform inference with CPU offloading. | Link | Colab link |
| MLX | Apple's machine learning framework optimized for Apple Silicon. | Link | — |
| LM Studio | Desktop application for running LLMs locally. | Link | — |

Here's a quick start example with Transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "LiquidAI/LFM2.5-350M"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
#   attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "What is C. elegans?"

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.1,
    top_k=50,
    repetition_penalty=1.05,
    max_new_tokens=512,
    streamer=streamer,
)
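
For serving, a comparable vLLM call might look like the sketch below (it assumes a vLLM build with LFM2.5 support, as listed in the table above, and reuses the generation parameters from this card):

from vllm import LLM, SamplingParams

llm = LLM(model="LiquidAI/LFM2.5-350M")

params = SamplingParams(
    temperature=0.1,
    top_k=50,
    repetition_penalty=1.05,
    max_tokens=512,
)

# chat() applies the model's chat template before generation.
outputs = llm.chat(
    [{"role": "user", "content": "What is C. elegans?"}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)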

πŸ”§ Fine-Tuning

We recommend fine-tuning LFM2.5 for your specific use case to achieve the best results.

| Name | Description | Docs | Notebook |
|------|-------------|------|----------|
| CPT (Unsloth) | Continued Pre-Training using Unsloth for text completion. | Link | Colab link |
| CPT (Unsloth) | Continued Pre-Training using Unsloth for translation. | Link | Colab link |
| SFT (Unsloth) | Supervised Fine-Tuning with LoRA using Unsloth. | Link | Colab link |
| SFT (TRL) | Supervised Fine-Tuning with LoRA using TRL. | Link | Colab link |
| DPO (TRL) | Direct Preference Optimization with LoRA using TRL. | Link | Colab link |
| GRPO (Unsloth) | GRPO with LoRA using Unsloth. | Link | Colab link |
| GRPO (TRL) | GRPO with LoRA using TRL. | Link | Colab link |
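
As a starting point, a minimal SFT-with-LoRA run with TRL might look like the sketch below; the dataset name and LoRA hyperparameters are placeholders for illustration, not the settings used in the linked notebooks.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: any chat-format dataset with a "messages" column works.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1%]")

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-350M",
    train_dataset=dataset,
    # Illustrative LoRA settings, not tuned for LFM2.5.
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
    args=SFTConfig(
        output_dir="lfm2.5-350m-sft",
        per_device_train_batch_size=4,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()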

πŸ“Š Performance

Benchmarks

| Model | GPQA Diamond | MMLU-Pro | IFEval | IFBench | Multi-IF |
|-------|--------------|----------|--------|---------|----------|
| LFM2.5-350M | 30.64 | 20.01 | 76.96 | 40.69 | 44.92 |
| LFM2-350M | 27.58 | 19.29 | 64.96 | 18.20 | 32.92 |
| Granite 4.0-H-350M | 22.32 | 13.14 | 61.27 | 17.22 | 28.70 |
| Granite 4.0-350M | 25.91 | 12.84 | 53.48 | 15.98 | 24.21 |
| Qwen3.5-0.8B (Instruct) | 27.41 | 37.42 | 59.94 | 22.87 | 41.68 |
| Qwen3.5-0.8B (Thinking) | 19.29 | -* | 32.93 | 22.00 | 26.44 |
| Gemma 3 1B IT | 23.89 | 14.04 | 63.49 | 20.33 | 44.25 |

| Model | CaseReportBench | BFCLv3 | BFCLv4 | τ²-Bench Telecom | τ²-Bench Retail |
|-------|-----------------|--------|--------|------------------|-----------------|
| LFM2.5-350M | 32.45 | 44.11 | 21.86 | 18.86 | 17.84 |
| LFM2-350M | 11.67 | 22.95 | 12.29 | 10.82 | 5.56 |
| Granite 4.0-H-350M | 12.44 | 43.07 | 13.28 | 13.74 | 6.14 |
| Granite 4.0-350M | 0.84 | 39.58 | 13.73 | 2.92 | 6.14 |
| Qwen3.5-0.8B (Instruct) | 13.83 | 35.08 | 18.70 | 12.57 | 6.14 |
| Qwen3.5-0.8B (Thinking) | 0.39 | 39.64 | 25.39 | 14.33 | 7.02 |
| Gemma 3 1B IT | 2.28 | 16.61 | 7.17 | 9.36 | 6.43 |

*Evaluation could not be completed due to doom looping.

CPU Inference

Decode throughput on CPU reaches 313 tok/s on an AMD CPU and 188 tok/s on Snapdragon Gen4; see the blog post for the full benchmark charts.

GPU Inference

See the blog post for GPU inference benchmarks.

πŸ“¬ Contact

Citation

@article{liquidAI2026350M,
  title   = {LFM2.5-350M: No Size Left Behind},
  author  = {Liquid AI},
  journal = {Liquid AI Blog},
  year    = {2026},
  note    = {www.liquid.ai/blog/lfm2-5-350m-no-size-left-behind},
}

@article{liquidai2025lfm2,
  title   = {LFM2 Technical Report},
  author  = {Liquid AI},
  journal = {arXiv preprint arXiv:2511.23404},
  year    = {2025},
}