# RoboInter-VLM: Vision-Language Model for RoboInter Manipulation Suite
This is the flagship model of the RoboInter-VLM series, based on Qwen2.5-VL-7B-Instruct. It delivers the strongest performance among the Qwen2.5-VL variants and is the recommended default checkpoint for general use.
Developed as part of the RoboInter project, the model is fine-tuned on the RoboInter-VQA dataset for intermediate representation understanding and generation in robotic manipulation.
## All Available Checkpoints
| Checkpoint | Base Model | Architecture | Parameters | Description | Link |
|---|---|---|---|---|---|
| RoboInter-VLM (this repo) | Qwen2.5-VL-7B-Instruct | Qwen2.5-VL | ~7B | Flagship model, recommended for best performance | https://huggingface.co/InternRobotics/RoboInter-VLM |
| RoboInter-VLM_qwenvl25_3b | Qwen2.5-VL-3B-Instruct | Qwen2.5-VL | ~3B | Lightweight model, suitable for efficient deployment | https://huggingface.co/InternRobotics/RoboInter-VLM_qwenvl25_3b |
| RoboInter-VLM_llavaov_7B | LLaVA-OneVision-Qwen2-7B | LLaVA-OneVision | ~7B | LLaVA-OneVision backbone with SigLIP vision encoder | https://huggingface.co/InternRobotics/RoboInter-VLM_llavaov_7B |
All checkpoints are stored in safetensors format with bfloat16 precision.
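Because the weights are published on the Hugging Face Hub, any checkpoint can be pre-fetched for offline use with `huggingface_hub`. Below is a minimal sketch; the local directory is an arbitrary example path, not a required location.

```python
from huggingface_hub import snapshot_download

# Pre-download the flagship checkpoint (repo ID taken from the table above).
# local_dir is an arbitrary example path.
local_path = snapshot_download(
    repo_id="InternRobotics/RoboInter-VLM",
    local_dir="./checkpoints/RoboInter-VLM",
)
print(f"Checkpoint downloaded to {local_path}")
```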
## Supported Tasks
These models are jointly trained on general VQA and three categories of our curated VQA tasks (see the illustrative sketch after the list):
- Generation: Predicting intermediate representations such as trajectory waypoints, gripper bounding boxes, contact points/boxes, object bounding boxes (current & final), etc.
- Understanding: Multiple-choice visual reasoning about contact states, grasp poses, object grounding, trajectory selection, movement directions, etc.
- Task Planning: High-level task planning including next-step prediction, action primitive recognition, success determination, etc.
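As a rough, hypothetical illustration of how such tasks can be posed to the model as chat messages: the prompt wording and image paths below are placeholders, and the exact task phrasings and expected output formats are defined by the RoboInter-VQA dataset and the RoboInterVLM codebases.

```python
# Hypothetical query for a generation-category task (object bounding box).
# Prompt wording and image path are placeholders, not the official task format.
bbox_query = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/tabletop_scene.jpg"},
            {"type": "text", "text": "Predict the bounding box of the mug the gripper should grasp."},
        ],
    }
]

# Hypothetical query for an understanding-category task (multiple choice).
contact_query = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/tabletop_scene.jpg"},
            {"type": "text", "text": "Is the gripper currently in contact with the object? (A) Yes (B) No"},
        ],
    }
]
```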
## Usage
### Quick Start (This Model)
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

model_path = "InternRobotics/RoboInter-VLM"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)
```
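A minimal single-image inference sketch following the standard Qwen2.5-VL chat pattern is shown below. The `qwen_vl_utils` helper is the usual Qwen2.5-VL tooling (`pip install qwen-vl-utils`), and the image path and prompt are placeholders; actual queries should follow the RoboInter-VQA task formats.

```python
from qwen_vl_utils import process_vision_info

# Placeholder image and prompt; replace with a RoboInter-style task query.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/scene.jpg"},
            {"type": "text", "text": "Predict the trajectory waypoints for picking up the cup."},
        ],
    }
]

# Build the chat prompt and preprocess the visual inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
generated_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```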
For detailed usage and inference examples, please refer to the RoboInterVLM-QwenVL codebase.
### LLaVA-OneVision Checkpoint
For loading and inference with the LLaVA-OneVision checkpoint, please refer to the RoboInterVLM-LLaVAOV codebase, as it requires custom model classes.
## Training & Evaluation
For full training and evaluation pipelines, please refer to:
- Qwen2.5-VL models: RoboInterVLM-QwenVL
- LLaVA-OneVision model: RoboInterVLM-LLaVAOV
- VQA Dataset: RoboInter-VQA
## Related Resources
- Project: RoboInter
- Annotation Data: RoboInter-Data
- VQA Dataset: RoboInter-VQA
## License
Please refer to the original licenses of RoboInter, Qwen2.5-VL, and LLaVA-OneVision.