License • Code • Project Page • Technical Report • Benchmarks • Getting Started
Introduction
Youtu-VL is a lightweight yet robust Vision-Language Model (VLM) with 4B parameters, built on Youtu-LLM. It pioneers Vision-Language Unified Autoregressive Supervision (VLUAS), which markedly strengthens visual perception and multimodal understanding. This enables a standard VLM to perform vision-centric tasks without task-specific additions. Across benchmarks, Youtu-VL stands out for its versatility, achieving competitive results on both vision-centric and general multimodal tasks.
Key Features
Comprehensive Vision-Centric Capabilities: The model demonstrates strong, broad proficiency across classic vision-centric tasks, delivering competitive performance in visual grounding, image classification, object detection, referring segmentation, semantic segmentation, depth estimation, object counting, and human pose estimation.
Promising Performance with High Efficiency: Despite its compact 4B-parameter architecture, the model achieves competitive results across a wide range of general multimodal tasks, including general visual question answering (VQA), multimodal reasoning and mathematics, optical character recognition (OCR), multi-image and real-world understanding, hallucination evaluation, and GUI agent tasks.
Model Download
| Model Name | Description | Download |
|---|---|---|
| Youtu-VL-4B-Instruct | Visual language model of Youtu-LLM | Model |
| Youtu-VL-4B-Instruct-GGUF | Visual language model of Youtu-LLM, in GGUF format | Model |
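If you prefer to fetch the GGUF weights programmatically, the sketch below uses `huggingface_hub.snapshot_download`. The `allow_patterns` filter (`*Q8_0*`) is an assumption based on the quantization tag used in the Quickstart; adjust it to match the file names actually published in the repository.

```python
# Minimal sketch: download the GGUF weights with huggingface_hub.
# The "*Q8_0*" pattern is an assumption about the published file names;
# change it to whichever quantization you need.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="tencent/Youtu-VL-4B-Instruct-GGUF",
    allow_patterns=["*Q8_0*"],  # download only the 8-bit quantized files
)
print("Model files downloaded to:", local_dir)
```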
Model Architecture Highlights
Vision-Language Unified Autoregressive Supervision (VLUAS): Youtu-VL is built on the VLUAS paradigm to mitigate the text-dominant optimization bias in conventional VLMs, where visual signals are treated as passive conditions and fine-grained details are often dropped. Rather than using vision features only as inputs, Youtu-VL expands the text lexicon into a unified multimodal vocabulary through a learned visual codebook, turning visual signals into autoregressive supervision targets. Jointly reconstructing visual tokens and text explicitly preserves dense visual information while strengthening multimodal semantic understanding.
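To make the idea concrete, here is a minimal, hypothetical sketch of such a unified objective: visual codebook indices are offset into an expanded vocabulary, concatenated with text token ids, and supervised with a single next-token cross-entropy loss. The vocabulary sizes, offsets, and function names are illustrative assumptions, not the released training code.

```python
# Illustrative sketch of unified autoregressive supervision (not the
# released implementation). Text tokens use ids [0, text_vocab) and
# visual codebook indices are shifted by +text_vocab so both share one
# output vocabulary; a single cross-entropy loss supervises both.
import torch
import torch.nn.functional as F

text_vocab, visual_codebook = 32_000, 8_192          # assumed sizes
unified_vocab = text_vocab + visual_codebook

def unified_targets(text_ids: torch.Tensor, visual_ids: torch.Tensor) -> torch.Tensor:
    """Concatenate visual-token and text-token targets into one sequence."""
    shifted_visual = visual_ids + text_vocab          # map codebook ids into the unified vocabulary
    return torch.cat([shifted_visual, text_ids], dim=-1)

def unified_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Next-token prediction over the unified vocabulary (text + visual)."""
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, unified_vocab),    # predictions for positions 0..T-2
        targets[:, 1:].reshape(-1),                   # targets shifted by one position
    )
```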
Vision-Centric Prediction with a Standard Architecture (no task-specific modules): Youtu-VL treats image and text tokens with equivalent autoregressive status, empowering it to perform vision-centric tasks for both dense vision prediction (e.g., segmentation, depth) and text-based prediction (e.g., grounding, detection) within a standard VLM architecture, eliminating the need for task-specific additions. This design yields a versatile general-purpose VLM, allowing a single model to flexibly accommodate a wide range of vision-centric and vision-language requirements.
Model Performance
Vision-Centric Tasks
General Multimodal Tasks
Quickstart
This guide shows how to quickly deploy and invoke the Youtu-VL-4B-Instruct-GGUF model with llama.cpp. Start the server with:
```bash
llama-server -hf tencent/Youtu-VL-4B-Instruct-GGUF:Q8_0 \
  --port 8080 \
  --image-max-tokens 2048 \
  --temp 0.1 \
  --top-p 0.001 \
  --repeat-penalty 1.05 \
  -n 12280 \
  --host 0.0.0.0
```
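Once the server is running, it can be queried over llama.cpp's OpenAI-compatible chat endpoint. The sketch below is an example using Python's `requests` with a base64-encoded image passed as a data URL; the image file name and prompt are placeholders, and image-input support depends on your llama.cpp build.

```python
# Minimal client sketch against llama-server's OpenAI-compatible API.
# "example.jpg" and the prompt are placeholders; the payload follows the
# OpenAI-style /v1/chat/completions format.
import base64
import requests

with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
    "temperature": 0.1,
}

resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=120)
print(resp.json()["choices"][0]["message"]["content"])
```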
Citation
If you find our work useful in your research, please consider citing our paper:
```bibtex
@article{youtu-vl,
  title={Youtu-VL: Unleashing Visual Potential via Unified Vision-Language Supervision},
  author={Tencent Youtu Lab},
  year={2026},
  eprint={2601.19798},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.19798},
}

@article{youtu-llm,
  title={Youtu-LLM: Unlocking the Native Agentic Potential for Lightweight Large Language Models},
  author={Tencent Youtu Lab},
  year={2025},
  eprint={2512.24618},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.24618},
}
```