Denali AI — Vision-Language Models for Garment Classification
Advancing structured attribute extraction from garment images through multi-stage reinforcement learning
Abstract
Denali AI develops and benchmarks vision-language models (VLMs) for structured garment attribute extraction — the task of analyzing a garment image and producing a complete JSON object describing 9 key attributes: type, color, pattern, neckline, sleeve length, closure, brand, size, and defect type.
We systematically evaluate the impact of supervised fine-tuning (SFT), Group Relative Policy Optimization (GRPO), and Group-relative Trajectory-based Policy Optimization (GTPO) across multiple model architectures (Granite4-Vision, Qwen3-VL, Qwen3.5-VL, InternVL3, Florence-2) and scales (0.8B to 122B parameters). Our best model, Granite4-Vision-3B SFT, achieves a 101.4% weighted score with a 100% JSON parse rate on the eval_hard_3500 benchmark; the strongest Qwen3-VL model, Qwen3-VL-8B SFT+GRPO, follows at 91.3%.
Leaderboard
| Rank | Model | Architecture | Params | Training | Weighted | SBERT+NLI | JSON% | Throughput |
|---|---|---|---|---|---|---|---|---|
| 1 | Granite4-Vision-3B SFT | Granite4-Vision | 4.5B | SFT | 101.4% | 87.5% | 100% | — |
| 2 | Qwen3-VL-8B SFT+GRPO | Qwen3-VL | 8B | SFT+GRPO | 91.3% | 78.7% | 100% | — |
| 3 | Qwen3-VL-2B SFT+GRPO v9 | Qwen3-VL | 2B | SFT+GRPO | 89.5% | 78.5% | 100% | — |
| 4 | Qwen3-VL-8B SFT+GRPO NVFP4 | Qwen3-VL | 8B | SFT+GRPO+NVFP4 | 89.5% | 77.0% | 100% | — |
| 5 | Qwen3-VL-8B Instruct (Base) | Qwen3-VL | 8B | Zero-shot | 87.5% | 75.6% | 100% | — |
| 6 | Qwen3-VL-8B Instruct NVFP4 | Qwen3-VL | 8B | Zero-shot+NVFP4 | 87.2% | 75.0% | 100% | — |
| 7 | Qwen3.5-2B Base | Qwen3.5-VL | 2B | Zero-shot | 84.4% | 73.0% | 100% | — |
| 8 | Qwen3-VL-2B SFT+GRPO v9 NVFP4 | Qwen3-VL | 2B | SFT+GRPO+NVFP4 | 84.2% | 74.1% | 100% | — |
| 9 | qwen3.5-0.8b-orr-sft | ? | ? | ? | 79.7% | 70.5% | 100% | — |
| 10 | qwen3.5-2b-orr-sft | ? | ? | ? | 79.6% | 69.9% | 100% | — |
| 11 | Qwen3-VL-2B Instruct (Base) | Qwen3-VL | 2B | Zero-shot | 76.4% | 66.7% | 100% | — |
| 12 | InternVL3-2B GRPO+GTPO Full | InternVL3 | 2B | GRPO+GTPO | 72.7% | 64.3% | 100% | — |
| 13 | InternVL3-2B GRPO+GTPO FP8 | InternVL3 | 2B | GRPO+GTPO+FP8 | 72.2% | 63.8% | 100% | — |
| 14 | InternVL3-2B Base | InternVL3 | 2B | Zero-shot | 71.8% | 63.7% | 100% | — |
| 15 | Moondream2 Base | Moondream | 1.6B | Zero-shot | 69.8% | 61.8% | 100% | — |
| 16 | Qwen3.5-2B SFT+GRPO+GTPO v8 | Qwen3.5-VL | 2B | SFT+GRPO+GTPO | 65.3% | 60.1% | 100% | — |
| 17 | phi-4-multimodal-sft | ? | ? | ? | 65.1% | 58.6% | 99% | — |
| 18 | Qwen3.5-2B SFT v7 | Qwen3.5-VL | 2B | SFT | 63.7% | 58.9% | 100% | — |
| 19 | Qwen3.5-35B GPTQ-Int4 | Qwen3.5 MoE | 35B (3B) | Zero-shot | 50.7% | 48.7% | 14% | — |
| 20 | Qwen3.5-9B NVFP4 v10 | Qwen3.5-VL | 9B | Zero-shot | 47.0% | 46.0% | 8% | — |
| 21 | Qwen3.5-9B SFT NVFP4 v11 | Qwen3.5-VL | 9B | SFT+NVFP4 | 46.3% | 45.5% | 8% | — |
| 22 | Qwen3.5-2B NVFP4 v10 | Qwen3.5-VL | 2B | Zero-shot | 42.9% | 42.9% | 0% | — |
| 23 | Qwen3.5-122B-A10B NVFP4 | Qwen3.5 MoE | 122B (10B) | Zero-shot+NVFP4 | 42.9% | 42.9% | 0% | — |
| 24 | Qwen3.5-2B SFT NVFP4 v11 | Qwen3.5-VL | 2B | SFT+NVFP4 | 42.9% | 42.9% | 0% | — |
| 25 | Qwen3.5-2B SFT+GRPO+GTPO NVFP4 | Qwen3.5-VL | 2B | SFT+GRPO+GTPO+NVFP4 | 42.9% | 42.9% | 0% | — |
| 26 | Phi-4 Multimodal NVFP4 | Phi-4 | 5.6B | Zero-shot+NVFP4 | 42.9% | 42.9% | 0% | — |
| 27 | Qwen3-8B FP8 | Qwen3 | 8B | Zero-shot+FP8 | 42.9% | 42.9% | 0% | — |
| 28 | granite4-vision-sft-vllm | ? | ? | ? | 42.9% | 42.9% | 0% | — |
| 29 | granite4-vision-sft-vllm-deepstack | ? | ? | ? | 42.9% | 42.9% | 0% | — |
Task Definition
Given a single garment image, the model must extract 9 structured attributes as a valid JSON object:
```json
{
  "type": "t-shirt",
  "color": "navy blue",
  "pattern": "solid",
  "neckline": "crew neck",
  "sleeve_length": "short sleeve",
  "closure": "pullover",
  "brand": "Nike",
  "size": "M",
  "defect_type": "small hole on left shoulder"
}
```
Field Importance Weights
Not all fields are equally important. The weighted score uses domain-specific multipliers:
| Field | Weight | Rationale |
|---|---|---|
| Type | 2.5x | Critical for inventory routing and categorization |
| Defect | 2.0x | Directly impacts quality control and pricing |
| Brand | 1.5x | Essential for authentication and valuation |
| Size | 1.5x | Required for accurate listing and search |
| Color, Pattern, Neckline, Sleeve, Closure | 1.0x | Standard descriptive attributes |
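The weighting above maps directly onto a scoring function. A minimal sketch: only the weights come from the table, while the per-field comparison is a simple exact-match stand-in for PeakBench's fuzzy, null-aware scoring.

```python
# Field importance weights from the table above.
FIELD_WEIGHTS = {
    "type": 2.5, "defect_type": 2.0, "brand": 1.5, "size": 1.5,
    "color": 1.0, "pattern": 1.0, "neckline": 1.0, "sleeve_length": 1.0,
    "closure": 1.0,
}

def weighted_score(pred: dict, gold: dict) -> float:
    """Weighted average of per-field scores.

    `field_score` is a placeholder exact-match comparison; the real
    pipeline uses PeakBench's null-aware, fuzzy per-field scoring.
    """
    def field_score(p, g):
        return 1.0 if str(p).strip().lower() == str(g).strip().lower() else 0.0

    total = sum(FIELD_WEIGHTS.values())  # 12.5 with the weights above
    return sum(
        w * field_score(pred.get(f), gold.get(f))
        for f, w in FIELD_WEIGHTS.items()
    ) / total
```

With these weights, getting only the 2.5x-weighted type field wrong costs 20% of the total score, four times the cost of missing a 1.0x descriptive field.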
Key Results
Per-Field Performance
Accuracy vs Throughput
Key finding: Qwen3-VL-2B v9 achieves the best accuracy-throughput trade-off at 89.5% weighted score and 15.9 samples/s — making it the Pareto-optimal choice for production deployment.
Structured Output Reliability
Fine-tuned models achieve a 100% JSON parse rate, while quantized zero-shot baselines (GPTQ-Int4, NVFP4) fail to produce valid JSON in 86-100% of cases. This suggests that SFT, not model scale, is what reliably teaches the structured output format.
Impact of Training Stages
Left panel: Adding GRPO+GTPO to Qwen3.5-2B improves brand recognition from 15.6% to 24.8% and defect detection from 89.5% to 95.1%, with a +1.6% overall gain.
Right panel: FP8 quantization of InternVL3-2B shows <1% accuracy degradation across all fields while reducing memory footprint, confirming FP8 as a practical deployment optimization.
Model Collections
By Architecture
| Collection | Models | Description |
|---|---|---|
| Qwen3-VL | 3 | Top-performing Qwen3-VL based models (2B, 8B, 8B-NVFP4) |
| Qwen3.5-VL | 7 | Qwen3.5-VL models (0.8B to 122B) |
| InternVL3 | 5 | InternVL3 models (1B, 2B) |
| Florence-2 | 3 | Florence-2 encoder-decoder models |
| Benchmarks | 2 | Evaluation and training datasets |
Training Pipeline
All fine-tuned models follow the Denali-AI Multi-Stage RL Pipeline:
```
┌─────────────────────────────────────────────────┐
│           Denali-AI Training Pipeline           │
└─────────────────────────────────────────────────┘

┌──────────┐        ┌──────────────┐        ┌──────────────┐
│ Stage 1  │        │   Stage 2    │        │   Stage 3    │
│   SFT    │───────▶│     GRPO     │───────▶│     GTPO     │
│  (LoRA)  │        │  (Rewards)   │        │ (Trajectory) │
└──────────┘        └──────────────┘        └──────────────┘
     │                     │                      │
JSON format          Field accuracy          Coherence &
acquisition          optimization            regularization
```
Stage 1: Supervised Fine-Tuning (SFT)
- Method: LoRA (r=16, alpha=32) on frozen base model
- Data: train-10k-balanced-v3 — 10,000 curated samples
- Objective: Teach valid JSON output format and basic field extraction
- Key outcome: 100% JSON parse rate
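The Stage 1 settings reduce to a small configuration. A sketch as a plain dict: the rank, alpha, frozen base, and dataset name come from the bullets above; any field beyond those is illustrative.

```python
# Stage 1 (SFT) configuration sketch. r, lora_alpha, the frozen base,
# and the dataset come from the pipeline description above.
SFT_CONFIG = {
    "adapter": "lora",
    "r": 16,                             # LoRA rank
    "lora_alpha": 32,                    # LoRA scaling factor
    "base_model_frozen": True,           # only adapter weights train
    "dataset": "train-10k-balanced-v3",  # 10,000 curated samples
}
```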
Stage 2: Group Relative Policy Optimization (GRPO)
- Method: Reward-based RL without a critic model
- Reward engine: 3-layer scoring system
- Layer 1: JSON validity gate (binary)
- Layer 2: Structural correctness (20% weight)
- Layer 3: Per-field content accuracy (80% weight)
- Key outcome: Improved field-level accuracy, especially for challenging fields
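The three reward layers compose as a gated weighted sum. A minimal sketch: the validity gate and the 20/80 split come from the description above, while the structural and content scorers are simplified stand-ins for the real engine.

```python
import json

EXPECTED_KEYS = {
    "type", "color", "pattern", "neckline", "sleeve_length",
    "closure", "brand", "size", "defect_type",
}

def reward(output_text: str, gold: dict) -> float:
    # Layer 1: JSON validity gate (binary). Invalid output earns zero.
    try:
        pred = json.loads(output_text)
    except (json.JSONDecodeError, TypeError):
        return 0.0
    if not isinstance(pred, dict):
        return 0.0

    # Layer 2: structural correctness (20% weight) -- expected keys present.
    structure = len(EXPECTED_KEYS & pred.keys()) / len(EXPECTED_KEYS)

    # Layer 3: per-field content accuracy (80% weight). Exact match is a
    # stand-in; the real engine scores each field fuzzily and null-aware.
    content = sum(
        str(pred.get(k, "")).lower() == str(gold.get(k, "")).lower()
        for k in EXPECTED_KEYS
    ) / len(EXPECTED_KEYS)

    return 0.2 * structure + 0.8 * content
```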
Stage 3: Group-relative Trajectory-based Policy Optimization (GTPO)
- Method: Conflict-aware gradient optimization with entropy regularization
- Key outcome: Trajectory-level coherence and reduced field-level conflicts
Evaluation Methodology
Benchmark
All models are evaluated on eval_hard_3500 — a curated benchmark of 3,500 challenging garment images selected for diversity in:
- Garment type (tops, bottoms, dresses, outerwear, accessories)
- Visual complexity (patterns, prints, multi-color)
- Edge cases (ambiguous attributes, partially visible labels)
Metrics — Powered by PeakBench
All evaluation is run through PeakBench, our centralized benchmarking platform. Results are automatically synced to HuggingFace model cards via the PeakBench-HF sync bridge. The canonical metric definitions live in peakbench_metrics.json and are shared between both platforms.
| Metric | Weight (JSON GT) | Model / Method | Description |
|---|---|---|---|
| Structured Match | 60% | Field-level JSON comparison | Per-field presence + value accuracy (null-aware) |
| SBERT Similarity | 25% | all-mpnet-base-v2 | Semantic cosine similarity via sentence embeddings |
| Token Set Ratio | 10% | rapidfuzz | Fuzzy word-set overlap (order-independent) |
| ROUGE-L | 5% | LCS F1 | Longest common subsequence F-measure |
| chrF++ | — | char+word n-grams | Character and word n-gram F-score |
| METEOR | — | stems+synonyms | Alignment with stemming and synonym matching |
| BLEU | — | n-gram precision | BLEU with brevity penalty |
| Levenshtein | — | edit distance | Normalized character-level edit distance |
| Hallucination | — | DeBERTa-v3 NLI | Contradiction detection between prompt and response |
| Consistency | — | SBERT pairwise | Determinism across repeated inference runs |
PeakBench Metric Definitions
PeakBench Quality Score
The headline composite metric. For JSON ground truth (our task), weights are: structured match 60%, SBERT similarity 25%, token set ratio 10%, ROUGE-L 5%. Exact case-insensitive match short-circuits to 1.0.
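Under these weights the composite reduces to a short function. A sketch, assuming the four sub-scores are already computed on a 0-1 scale (the real platform computes them internally):

```python
def quality_score(pred: str, gold: str,
                  structured: float, sbert: float,
                  token_set: float, rouge_l: float) -> float:
    """PeakBench-style composite for JSON ground truth (sketch)."""
    # Exact case-insensitive match short-circuits to a perfect score.
    if pred.strip().lower() == gold.strip().lower():
        return 1.0
    # Otherwise: 60/25/10/5 weighted sum of the sub-metrics.
    return (0.60 * structured + 0.25 * sbert
            + 0.10 * token_set + 0.05 * rouge_l)
```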
Structured Match
Per-field JSON comparison. Decomposes into field_match_rate (fraction of expected keys present) and value_accuracy (fraction of matched fields with correct values). Null-aware: treats "N/A", "none", "not visible", etc. as equivalent null values.
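The null-aware comparison amounts to normalization before matching. A sketch; the alias list below is an illustrative subset, not the platform's actual list:

```python
# Values treated as equivalent "null" markers (illustrative subset).
NULL_ALIASES = {"", "n/a", "na", "none", "null", "not visible", "unknown"}

def normalize(value):
    """Map any null-like value to None, everything else to lowercase text."""
    if value is None:
        return None
    text = str(value).strip().lower()
    return None if text in NULL_ALIASES else text

def values_match(pred_value, gold_value):
    # "N/A", None, and "not visible" all compare equal after normalization.
    return normalize(pred_value) == normalize(gold_value)
```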
SBERT Similarity
Semantic cosine similarity using all-mpnet-base-v2 sentence embeddings. Captures meaning-level similarity — "navy blue" and "dark blue" score high despite different strings.
chrF++ Score
Character and word n-gram F-score. Robust for morphologically rich text and partial matches at the character level.
METEOR Score
Alignment-based metric with stemming and synonym matching. Captures paraphrase similarity — "t-shirt" and "tee shirt" score high.
ROUGE-L Score
Longest common subsequence F1. Measures structural word-order overlap between prediction and ground truth.
BLEU Score
N-gram precision with brevity penalty. Standard MT metric, useful as a surface-level quality signal.
Token Set Ratio
Fuzzy word-set overlap via rapidfuzz. Order-independent — "blue navy" matches "navy blue" perfectly.
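The benchmark computes this with rapidfuzz; the order-independence can be illustrated with a stdlib approximation that compares sorted token sets:

```python
from difflib import SequenceMatcher

def token_set_ratio(a: str, b: str) -> float:
    """Stdlib approximation of a token-set ratio (rapidfuzz's version is
    more sophisticated). Tokens are deduplicated and sorted, so word
    order is ignored entirely."""
    norm_a = " ".join(sorted(set(a.lower().split())))
    norm_b = " ".join(sorted(set(b.lower().split())))
    return SequenceMatcher(None, norm_a, norm_b).ratio()
```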
Levenshtein Ratio
Normalized character-level edit distance. 1 - (edits / max_length). Catches typos and minor variations.
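The formula above, with a small dynamic-programming edit-distance sketch:

```python
def levenshtein_ratio(a: str, b: str) -> float:
    """1 - (edits / max_length), per the definition above."""
    if not a and not b:
        return 1.0
    # Classic DP over prefix edit distances.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return 1.0 - prev[-1] / max(len(a), len(b))
```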
Hallucination Score
NLI contradiction probability between prompt and response using DeBERTa-v3-base-mnli-fever-anli. Higher = more hallucinated. When contradiction > 0.5, the composite score is penalized.
Consistency (Semantic)
Average pairwise SBERT cosine across multiple inference runs on the same prompt. Measures model determinism. 1.0 = perfectly consistent outputs.
JSON Parse Rate
Percentage of outputs that are valid, parseable JSON. Fine-tuned models achieve 100%; zero-shot models often fail at 0-14%.
Throughput
Samples per second via vLLM on NVIDIA RTX PRO 6000 Blackwell (98 GB VRAM), 8 concurrent workers.
Full metric definitions live in peakbench_metrics.json; the shared config ensures PeakBench and the HuggingFace model cards always display the same numbers.
Evaluation Protocol
- Inference: 8 concurrent workers via OpenAI-compatible API (vLLM)
- Samples: All 3,500 samples, no subsampling
- Compute: NVIDIA RTX PRO 6000 Blackwell (98 GB VRAM)
- Reproducibility: Fixed prompts, deterministic sampling (temperature=0)
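The protocol maps onto a fixed per-sample request payload. A sketch of that payload; the model name and prompt text are placeholders, and the endpoint is whatever vLLM serves:

```python
import json

def build_request(model: str, prompt: str, image_b64: str) -> dict:
    """OpenAI-compatible chat payload used per sample (sketch).

    temperature=0 gives the deterministic sampling described above; the
    actual fixed prompt used in evaluation is not reproduced here.
    """
    return {
        "model": model,
        "temperature": 0,  # deterministic, reproducible decoding
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    }

payload = build_request("qwen3-vl-8b",
                        "Extract the 9 garment attributes as JSON.", "...")
```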
Key Findings
Best model: Granite4-Vision-3B SFT achieves 101.4% weighted score with 100% JSON parse rate on 3,500 hard samples.
Granite4-Vision dominates. The best Granite4-Vision model (101.4%) leads the best Qwen3-VL (91.3%) by 10.1pp. Per-architecture best scores: Granite4-Vision (101%), Qwen3-VL (91%), Qwen3.5-VL (84%), unattributed models (80%), InternVL3 (73%).
SFT is essential for structured output. Fine-tuned models average a 76% JSON parse rate (best weighted score 101.4%); zero-shot models average 52% (best weighted score 87.5%). Fine-tuning adds +13.9pp to the top weighted score.
NVFP4 quantization costs 5.3pp on the single fine-tuned pair measured (Qwen3-VL-2B SFT+GRPO v9: 89.5% → 84.2%), while reducing model size by roughly 60% and increasing throughput by roughly 50%.
Hardest fields (on best model): neckline (79%), pattern (80%), type (80%). Easiest: brand (96%), defect (97%), size (100%).
Scale vs efficiency. Best large (Qwen3-VL-8B SFT+GRPO: 91.3%) beats best small (Qwen3-VL-2B SFT+GRPO v9: 89.5%) by 1.8pp — small model is highly competitive for edge deployment.
Benchmark coverage: 29 models across 9 architectures, 12 fine-tuned + 17 zero-shot/quantized.
Research Directions & Future Work
Near-Term Improvements
| Direction | Expected Impact | Rationale |
|---|---|---|
| SFT+GRPO on Moondream | +5-15pp | Zero-shot at 69.8%, fine-tuning consistently adds significant gains |
| SFT+GRPO on Qwen3.5 MoE | +5-15pp | Zero-shot at 50.7%, fine-tuning consistently adds significant gains |
| SFT+GRPO on Phi-4 | +5-15pp | Zero-shot at 42.9%, fine-tuning consistently adds significant gains |
| SFT+GRPO on Qwen3 | +5-15pp | Zero-shot at 42.9%, fine-tuning consistently adds significant gains |
| NVFP4 quantize Granite4-Vision-3B SFT | -1-2pp, +50% speed | At 101.4%, no quantized variant exists yet |
| NVFP4 quantize Qwen3.5-2B SFT+GRPO+GTPO v8 | -1-2pp, +50% speed | At 65.3%, no quantized variant exists yet |
| NVFP4 quantize Qwen3.5-2B SFT v7 | -1-2pp, +50% speed | At 63.7%, no quantized variant exists yet |
| GTPO on Qwen3-VL-8B SFT+GRPO | +1-3pp | Currently SFT+GRPO only, GTPO adds trajectory coherence |
| GTPO on Qwen3-VL-2B SFT+GRPO v9 | +1-3pp | Currently SFT+GRPO only, GTPO adds trajectory coherence |
Architecture Exploration
Models not yet benchmarked — recommended based on current findings:
| Model | Parameters | Why Promising |
|---|---|---|
| Qwen3-VL-3B-Instruct | 3B | Same family as our strongest Qwen3-VL entries (ranks 2-4), fills the 2B-8B gap |
| InternVL3-8B | 8B | Larger InternVL — may close gap to Qwen3-VL at same scale |
| InternVL3-4B | 4B | Mid-range InternVL — potential efficiency sweet spot |
| SmolVLM2-2.2B-Instruct | 2.2B | HuggingFace's efficient VLM — strong structured output |
| PaliGemma2-3B | 3B | Google VLM with excellent OCR — may improve brand/size fields |
| Phi-4-multimodal-instruct | 5.6B | Microsoft VLM — needs SFT (zero-shot JSON fails) |
| MiniCPM-V-2.6 | 2.8B | Strong small VLM with good OCR capabilities |
| Molmo-7B-D | 7B | Allen AI VLM — strong visual grounding, may help with defect detection |
| Idefics3-8B | 8B | HuggingFace VLM — instruction-following optimized |
| DeepSeek-VL2-Small | 3B | DeepSeek's latest compact VLM — strong reasoning |
Long-Term Research
- Ensemble routing: Route each field to its best-performing model architecture
- Multi-image input: Front + back + tag images simultaneously for higher brand/size accuracy
- Curriculum learning: Progressive difficulty — easy garments first, hard edge cases last
- Synthetic data: Use 122B models to generate training labels at scale
- Active learning: Prioritize annotation of samples where models disagree most
- Guided JSON decoding: Constrained generation to force valid JSON without training
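The guided-decoding direction needs only a schema for the nine fields. A sketch of that schema, with field names taken from the Task Definition; passing it to a constrained-decoding backend (for example vLLM's guided JSON support) is left illustrative:

```python
# JSON Schema for the 9-attribute output (field names from the task definition).
FIELDS = [
    "type", "color", "pattern", "neckline", "sleeve_length",
    "closure", "brand", "size", "defect_type",
]

GARMENT_SCHEMA = {
    "type": "object",
    "properties": {field: {"type": "string"} for field in FIELDS},
    "required": FIELDS,              # all nine fields must be present
    "additionalProperties": False,   # no extra keys allowed
}
```

With constrained decoding, a model cannot emit anything outside this schema, so the JSON parse rate is 100% by construction even without training.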
Key Open Questions
- Why does Granite4-Vision outperform Qwen3-VL by 10.1pp at similar scale? Vision encoder, cross-attention, or training data?
- Can RL gains (GRPO/GTPO) be amplified beyond current levels with better reward engineering?
- Is there a parameter sweet spot between 2B and 8B where accuracy saturates?
- Would domain-specific pre-training (garment images) outperform general VLM fine-tuning?
- The closure field averages only 52% across the top-5 models — is the ground truth noisy, or is this attribute genuinely hard?
Datasets
| Dataset | Samples | Purpose | Link |
|---|---|---|---|
| eval_hard_3500 | 3,500 | Evaluation benchmark (hard subset) | Link |
| train_10k_balanced_v3 | 10,000 | Training data (balanced sampling) | Link |
Last updated: 2026-04-04 03:59 UTC
Citation
```bibtex
@misc{denali-ai-2026,
  title={Structured Garment Attribute Extraction via Multi-Stage Reinforcement Learning},
  author={Denali AI},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/Denali-AI}
}
```
License
All models and datasets are released under the Apache 2.0 License.
Contact
- Organization: Denali Advanced Integration
- Issues: GitHub
- HuggingFace: Denali-AI