# Hardness Data Mix - Resolution Sufficiency Dataset
A large-scale dataset of document images with labels indicating the minimum resolution required to accurately answer questions about those documents.
## Dataset Description
This dataset contains 81,924 document image-question pairs labeled with resolution sufficiency information. Each sample is annotated with a "hardness" label indicating the minimum resolution level needed to answer questions about that document accurately.
### Dataset Summary
- Total Samples: 81,924
- Image Formats: JPEG, PNG
- Resolutions Available: Low (384×384), Medium (512×512), High (768×768+)
- Features: Multi-path image storage (low, mid, high resolution versions)
- Languages: English
- Domains: Mixed document types (text, charts, infographics, documents)
### Key Statistics

Class distribution:

- Class 0 (low res sufficient): 38,537 samples (47.0%)
- Class 1 (medium res needed): 19,929 samples (24.3%)
- Class 2 (high res required): 23,458 samples (28.6%)

Total size: ~4.92 GB (Parquet format); average sample size: ~60 KB
### Dataset Fields

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique sample identifier |
| `question` | string | Question about the document |
| `low_path` | string | Path to low-resolution image (384×384) |
| `mid_path` | string | Path to medium-resolution image (512×512) |
| `high_path` | string | Path to high-resolution image (768×768+) |
| `hard` | int | Label: 0 = low res enough, 1 = medium needed, 2 = high needed |
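As a sketch of how these fields fit together, the snippet below builds a hypothetical record (the `id` and paths are illustrative, not actual dataset values) and selects the image path that matches the `hard` label:

```python
# Hypothetical record mirroring the fields above; values are illustrative.
sample = {
    "id": "sample_00001",
    "question": "What is the title of the document?",
    "low_path": "images/low/sample_00001.jpg",
    "mid_path": "images/mid/sample_00001.jpg",
    "high_path": "images/high/sample_00001.jpg",
    "hard": 1,
}

# Map the hardness label to the matching image-path field.
PATH_BY_HARD = {0: "low_path", 1: "mid_path", 2: "high_path"}

def minimal_sufficient_path(record: dict) -> str:
    """Return the path of the cheapest resolution labeled as sufficient."""
    return record[PATH_BY_HARD[record["hard"]]]

print(minimal_sufficient_path(sample))  # images/mid/sample_00001.jpg
```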
## Data Sources

The dataset is a curated mix from multiple established VQA and document understanding benchmarks:

### Source Datasets
**TextVQA** (~25%)
- Text-rich images from scenes and documents
- Focus on reading and understanding text in images

**DocVQA** (~30%)
- Document-focused question answering
- Scanned document images

**ChartQA** (~15%)
- Chart and figure understanding
- Questions about data visualizations

**InfographicVQA** (~20%)
- Complex infographic understanding
- Multi-element visual reasoning

**HME100K** (~10%)
- Handwritten mathematical expressions
- Document analysis
## Labeling Strategy
Each sample was labeled based on:
- Resolution Effectiveness Analysis: Performance of VLMs at each resolution level
- Question Complexity: Type and difficulty of the question
- Image Content: Visual elements requiring high resolution
- Error Analysis: Where models fail at lower resolutions
### Class Definitions
- Class 0 (Low - 384×384): VLM achieves ≥95% accuracy at low resolution
- Class 1 (Medium - 512×512): VLM needs medium resolution for adequate performance
- Class 2 (High - 768×768+): VLM requires high resolution for accurate answers
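The class definitions above amount to a thresholding rule over per-resolution VLM accuracy. The ≥95% cutoff for Class 0 comes from this card; reusing the same cutoff at medium resolution for Class 1 is an assumption made here for illustration:

```python
# Illustrative sketch of the labeling logic implied by the class definitions.
# The 0.95 threshold for low resolution comes from the card; applying the
# same threshold at medium resolution is an assumption.
def hardness_label(acc_low: float, acc_mid: float, threshold: float = 0.95) -> int:
    """Assign a hardness class from per-resolution VLM accuracy on a sample."""
    if acc_low >= threshold:
        return 0  # low resolution (384x384) is sufficient
    if acc_mid >= threshold:
        return 1  # medium resolution (512x512) is needed
    return 2      # high resolution (768x768+) is required

print(hardness_label(0.97, 0.99))  # 0
print(hardness_label(0.60, 0.96))  # 1
print(hardness_label(0.40, 0.70))  # 2
```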
## Usage

### Loading with Hugging Face Datasets

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("Kimhi/hardness_data_mix")

# Access splits
train_split = dataset["train"]  # if available
full_data = dataset["hardness_data_mix"]

# Display a sample
sample = full_data[0]
print(sample)
```
### Loading with Pandas

```python
import pandas as pd

# Load the Parquet file
df = pd.read_parquet("hardness_data_mix.parquet")

# Inspect
print(f"Shape: {df.shape}")
print(f"Columns: {df.columns.tolist()}")
print(df.head())

# Class distribution
print(df["hard"].value_counts().sort_index())
```
### Use in Training

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Load data
df = pd.read_parquet("hardness_data_mix.parquet")

# Stratified split preserves the class balance of `hard`
train_df, val_df = train_test_split(
    df,
    test_size=0.1,
    stratify=df["hard"],
    random_state=42,
)

# Use with training scripts
train_df.to_parquet("train_data.parquet")
val_df.to_parquet("val_data.parquet")
```
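Given the class imbalance noted in Key Statistics, a common follow-up when training a classifier on `hard` is to weight the loss by inverse class frequency (sklearn-style "balanced" weights, usable e.g. as `torch.nn.CrossEntropyLoss(weight=...)`). A minimal sketch using the counts from this card:

```python
# Class counts from this card's Key Statistics.
counts = {0: 38537, 1: 19929, 2: 23458}
total = sum(counts.values())  # 81,924

# sklearn-style "balanced" weights: n_samples / (n_classes * n_c).
# Rarer classes get proportionally larger weights.
weights = {c: total / (len(counts) * n) for c, n in counts.items()}

for c in sorted(weights):
    print(f"class {c}: weight {weights[c]:.3f}")
```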
## Dataset Applications

This dataset is designed for:

**Resolution Selection Research**
- Training classifiers to predict required resolution
- Understanding resolution vs. accuracy tradeoffs

**Efficient VLM Inference**
- Optimizing multi-resolution inference
- Reducing computational costs
- Adaptive resolution selection
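To illustrate adaptive resolution selection, the sketch below routes each query through a gate that predicts the hardness class, so the VLM only ever receives the cheapest sufficient resolution. Both `gate_predict` and `run_vlm` are hypothetical stand-ins, not APIs shipped with this dataset:

```python
# Resolution (in pixels per side) corresponding to each hardness class.
RESOLUTIONS = {0: 384, 1: 512, 2: 768}

def gate_predict(question: str) -> int:
    # Placeholder: a real gate (e.g. a classifier trained on this dataset)
    # would inspect the image and question; here we always predict class 0.
    return 0

def run_vlm(image_side: int, question: str) -> str:
    # Placeholder for VLM inference at the chosen resolution.
    return f"answer at {image_side}px"

def answer(question: str) -> str:
    """Run the VLM at the cheapest resolution the gate deems sufficient."""
    side = RESOLUTIONS[gate_predict(question)]
    return run_vlm(side, question)

print(answer("What year is printed in the header?"))  # answer at 384px
```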
**Model Benchmarking**
- Evaluating VLM robustness at different resolutions
- Comparing resolution handling strategies

**Academic Research**
- Understanding visual information requirements
- Document understanding challenges
## Related Models

This dataset is used to train the CARES (Context-Aware Resolution Selector) models:

**SmolVLM Resolution Gate**
- Model: Kimhi/smolvlm-res-gate
- Approach: Lightweight classifier on frozen features
- Use Case: Fast, on-device inference

**Granite-Docling Resolution Gate**
- Model: Kimhi/granite-docling-res-gate-lora
- Approach: Autoregressive SFT with LoRA
- Use Case: Production deployment
## Ethical Considerations

### Intended Use
- Academic research and development
- Industrial document understanding applications
- Model benchmarking and evaluation
- Responsible AI research
### Potential Risks
- Dataset reflects biases in source datasets
- May not generalize to specific document domains
- Quality varies based on document type
- Labels are proxy measures of resolution necessity
### Mitigation
- Stratified sampling ensures class balance
- Multi-source composition reduces single-domain bias
- Regular validation against real-world tasks
- Transparent documentation of limitations
## Limitations
- Domain Specificity: Primarily document-focused
- Language: Primarily English
- Quality Variation: Mixed-quality source data
- Labeling: Labels based on model performance, not human judgment
- Representation: May not include all document types equally
## Citation

If you use this dataset, please cite:

```bibtex
@misc{kimhi2025carescontextawareresolutionselector,
  title={CARES: Context-Aware Resolution Selector for VLMs},
  author={Moshe Kimhi and Nimrod Shabtay and Raja Giryes and Chaim Baskin and Eli Schwartz},
  year={2025},
  eprint={2510.19496},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
}
```
## Acknowledgements
- Dataset sources: TextVQA, DocVQA, ChartQA, InfographicVQA, HME100K communities
- Infrastructure: Hugging Face Hub
- Hosting: Hugging Face Datasets
## License
CC BY 4.0 - See LICENSE for details
## Contact
For questions about this dataset, please open an issue on the CARES GitHub repository.
**Dataset Version:** 1.0
**Last Updated:** 2024
**Recommended Citation:** hardness_data_mix, Kimhi (2024)