LFM2.5-1.2B-TR-Base

This model is a Turkish-language adaptation of LiquidAI/LFM2.5-1.2B-Base, developed via continued pre-training (CPT). It uses the hybrid Liquid Foundation Model architecture and offers strong reasoning capabilities for the 1.2B-parameter class.

🚀 Model Details

  • Developed by: SkyAsl
  • Base Model: LiquidAI/LFM2.5-1.2B-Base
  • Language: Turkish
  • Architecture: Hybrid (Linear Attention + Convolution)
  • Training Method: LoRA (Rank 128) - Continued Pre-training
  • Total Training Tokens: ~805 Million (0.8B)

📚 Dataset Mix

The model was trained on a diverse, curated mix of Turkish data covering reasoning, formal knowledge, and contemporary language (a loading/filtering sketch follows the list):

  1. Logic & Mathematics: duxx/orca-math-word-problems-tr (100k samples) - Focused on reasoning and chain-of-thought in Turkish.
  2. General Knowledge: musabg/wikipedia-tr (290k samples) - Extensive encyclopedic knowledge, filtered for long-form content (>500 chars).
  3. Conversational & Fluency: gorkemgoknar/tr_ted_talk_translated (180k samples) - Natural speech patterns and translated talk transcripts.
  4. News & Contemporary Prose: turkish-nlp-suite/Havadis (300k samples) - Modern news language, filtered for high-quality long articles (>1000 chars).
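The length filters above can be reproduced with the 🤗 datasets library. A minimal sketch, assuming each corpus exposes a plain "text" column (column and configuration names are assumptions; the exact preprocessing pipeline is not published):

from datasets import load_dataset, concatenate_datasets

# Load two of the corpora described above (the math and TED-talk sets are added the same way).
wiki = load_dataset("musabg/wikipedia-tr", split="train")
havadis = load_dataset("turkish-nlp-suite/Havadis", split="train")

# Keep only long-form content, mirroring the filters described above.
wiki = wiki.filter(lambda ex: len(ex["text"]) > 500)         # encyclopedic articles
havadis = havadis.filter(lambda ex: len(ex["text"]) > 1000)  # long news articles

# Combine into a single continued pre-training corpus.
corpus = concatenate_datasets([wiki, havadis]).shuffle(seed=42)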

🛠️ Technical Training Specifications

A pre-packing technique was used during training: all text was concatenated and split into fixed-length blocks of 4096 tokens to maximize GPU efficiency, as sketched below.
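
A minimal sketch of this packing step, assuming an already tokenized corpus with an "input_ids" column (the concatenate-and-chunk function below is the standard approach, not the exact script used for this model):

BLOCK_SIZE = 4096

def pack_into_blocks(examples):
    # Concatenate every tokenized sequence in the batch into one long token stream.
    concatenated = sum(examples["input_ids"], [])
    # Drop the tail so every block is exactly BLOCK_SIZE tokens long.
    total_length = (len(concatenated) // BLOCK_SIZE) * BLOCK_SIZE
    blocks = [concatenated[i : i + BLOCK_SIZE] for i in range(0, total_length, BLOCK_SIZE)]
    # For causal-LM continued pre-training the labels are the input ids themselves.
    return {"input_ids": blocks, "labels": [list(b) for b in blocks]}

# packed_dataset = tokenized_dataset.map(pack_into_blocks, batched=True,
#                                        remove_columns=tokenized_dataset.column_names)

Key hyperparameters of the run: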

  • LoRA Rank: 128
  • LoRA Alpha: 256
  • Context Length: 4096
  • Learning Rate: 2e-5
  • Epoch: 1
  • Optimizer: Paged AdamW 32bit
  • Precision: bfloat16 (Trained on NVIDIA A100 40GB)
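These hyperparameters map onto a standard PEFT + Transformers setup. A minimal configuration sketch, assuming an LFM2.5 causal LM loaded in bfloat16 (target_modules, batch size, and gradient accumulation are assumptions; they are not stated in this card):

from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=128,                        # LoRA Rank
    lora_alpha=256,               # LoRA Alpha
    target_modules="all-linear",  # assumption: actual target modules are not documented
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="lfm2.5-1.2b-tr-cpt",
    num_train_epochs=1,
    learning_rate=2e-5,
    optim="paged_adamw_32bit",       # requires bitsandbytes
    bf16=True,                       # trained in bfloat16 on an A100 40GB
    per_device_train_batch_size=1,   # assumption
    gradient_accumulation_steps=8,   # assumption
    logging_steps=50,
)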

⚠️ Important Note: Base Model Status

This is a Base Model. It has not been fine-tuned for instruction following or chat (SFT). While it has acquired strong Turkish language foundations, it behaves as an "autocomplete" engine: it continues text rather than following instructions. To use it as an assistant, instruction tuning is strongly recommended.

  • An instruction-tuned version is coming soon.

Usage (Inference)

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "SkyAsl/LFM2.5-1.2B-TR-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Base model: give it a prompt to continue, not an instruction.
prompt = "Türkiye Cumhuriyeti'nin kurucusu Mustafa Kemal Atatürk,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=True,   # temperature only takes effect when sampling is enabled
        temperature=0.3,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
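
Because this is a base model, the output continues the prompt rather than answering it. Note that temperature only affects generation when do_sample=True is set; drop both arguments for deterministic greedy decoding.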

🙏 Acknowledgements

This work is based on the LiquidAI/LFM2.5-1.2B-Base model. I thank the LiquidAI team for releasing the Hybrid Liquid Foundation Model architecture and enabling this research.

I also acknowledge the creators and maintainers of the datasets used during continued pre-training, listed in the Dataset Mix section above.

All datasets were used in accordance with their respective licenses and intended research or educational purposes.

📄 License

This model is released under the same license terms as the base model (LiquidAI/LFM2.5-1.2B-Base).
Please refer to the original model card for detailed license information.
