
🚗 BATON

Behavioral Analysis of Transition and Operation in Naturalistic Driving

     

A large-scale multimodal benchmark for bidirectional human–DAS control transition in naturalistic driving
Submitted to ACM Multimedia 2026



🎬 Live Preview


Continuous sequence — cabin fisheye · front view · 5 fps

8 CAN/IMU sensor streams, cycling through all channels

Daytime time-lapse — front · cabin · 2-min intervals

Nighttime driving — time-lapse with ⬆ DAS Handover and ↩ Human Takeover event highlights

📊 Dataset at a Glance

🌍 Routes: 380 · 👀 Drivers: 127 · 🚙 Car Models: 84 · ⏱️ Duration: 136.6 h · 🔄 Handover Events: 2,892
🤖 DAS Driving: 52.5% · 🧑 Human Driving: 47.5% · ⬆️ DAS Handover: 1,460 · ↩️ Human Takeover: 1,432 · 🌍 Coverage: 6 Continents

Global distribution of participants, per-driver duration, and handover event breakdown.

🔬 Data Collection & Modalities

Setup: Non-intrusive plug-and-play OBD-II dongle + dual cameras. Drivers use their own vehicles during real daily commutes — no lab, no script.

Component Spec
📑 OBD-II Dongle CAN-bus at 100 Hz
📷 Front camera 526×330 · H.264 · 20 fps
🎥 Cabin fisheye 1928×1208 · HEVC · 20 fps
🛰️ GPS 10 Hz

9 synchronized modalities:

  • vehicle_dynamics.csv — speed, accel, steering, pedals, DAS status
  • planning.csv — DAS curvature, lane change intent
  • radar.csv — lead vehicle distance & relative speed
  • driver_state.csv — face pose, eye openness, awareness
  • imu.csv — 3-axis accel & gyro at 100 Hz
  • gps.csv — coordinates, heading
  • localization.csv — road curvature, lane position
  • qcamera.mp4 — front-view video
  • dcamera.mp4 — in-cabin fisheye video
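Since each stream is sampled at its own rate, downstream use typically begins by aligning streams on a shared timestamp axis. A minimal sketch with pandas, assuming a hypothetical `t` timestamp column and illustrative channel names (`speed`, `lead_dist`) rather than the dataset's documented schema:

```python
import pandas as pd

# Two toy streams at different rates; "t", "speed", and "lead_dist" are
# illustrative column names, not BATON's documented schema.
dyn = pd.DataFrame({"t": [0.00, 0.01, 0.02, 0.03],
                    "speed": [20.1, 20.2, 20.2, 20.3]})   # fast dynamics stream
radar = pd.DataFrame({"t": [0.005, 0.025],
                      "lead_dist": [31.0, 30.5]})         # slower radar stream

# For each dynamics row, take the latest radar reading at or before its
# timestamp; both frames must be sorted by the merge key.
aligned = pd.merge_asof(dyn, radar, on="t", direction="backward")
print(aligned)
```

The first dynamics row has no earlier radar reading, so its `lead_dist` is NaN; a forward-fill or interpolation policy is a per-task choice.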

📷 Front · Day

📷 Front · Night

⬆️ DAS Handover

🎥 Cabin · Day

🎥 Cabin · Night

↩️ Takeover

Aligned multimodal streams around a HANDOVER event: cabin video · front video · GPS trajectory · sensor signals.

πŸ† Benchmark Tasks


  • 🎯 Task 1: Driving action recognition (7-class). Samples: 979,809. Labels: Cruising · Car Following · Accelerating · Braking · Lane Change · Turning · Stopped. Primary metric: Macro-F1
  • ⬆️ Task 2: Handover prediction (Human→DAS). Samples: 56,564. Labels: Handover (14.9%) · No Handover. Primary metric: AUPRC
  • ↩️ Task 3: Takeover prediction (DAS→Human). Samples: 71,079. Labels: Takeover (11.9%) · No Takeover. Primary metric: AUPRC

Evaluation protocol: Cross-driver split · 5-second input window · 3-second prediction horizon · 3 seeds (42, 123, 7)
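The windowing above (5 s of input, a 3 s prediction horizon) can be sketched as simple array slicing over a 100 Hz stream. The signal and event arrays below are synthetic placeholders, not BATON data:

```python
import numpy as np

HZ = 100                                  # CAN/IMU sampling rate
WIN, HORIZON = 5 * HZ, 3 * HZ             # 5 s input window, 3 s lookahead

rng = np.random.default_rng(42)
signal = rng.normal(size=2000)            # one synthetic sensor channel
events = np.zeros(2000, dtype=bool)
events[1200] = True                       # a synthetic takeover event

t_end = 1000                              # window ends at sample 1000 (t = 10 s)
x = signal[t_end - WIN:t_end]             # model input: the 5 s before t_end
y = events[t_end:t_end + HORIZON].any()   # positive iff an event occurs within 3 s
print(x.shape, bool(y))
```

Here the event at sample 1200 falls 2 s past the window's end, inside the 3 s horizon, so the sample is labeled positive.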


πŸ“ Repository Structure

BATON/
├── benchmark/                   # Benchmark data and generation code
│   ├── generate_benchmark.py        # Full benchmark construction pipeline
│   ├── routes.csv                   # Route metadata (380 routes)
│   ├── action_labels.csv            # 1 Hz action labels
│   ├── task1_action_samples.csv     # Task 1 samples
│   ├── task2_activation_samples_h{1,3,5}.csv   # Task 2 at 3 horizons
│   ├── task3_takeover_samples_h{1,3,5}.csv     # Task 3 at 3 horizons
│   ├── split_cross_driver.json      # Primary evaluation split
│   ├── split_cross_vehicle.json     # Cross-vehicle split
│   ├── split_random.json            # Random split
│   └── benchmark_protocol.md        # Detailed protocol specification
│
├── baseline/                    # Training and evaluation code
│   ├── config.py                    # Paths, modality definitions, hyperparameters
│   ├── dataset.py                   # PyTorch dataset for all tasks
│   ├── models.py                    # GRU and TCN with gated fusion
│   ├── metrics.py                   # Evaluation metrics
│   ├── train_nn.py                  # Neural network training (GRU / TCN)
│   ├── train_classical.py           # XGBoost and LR baselines
│   ├── run_vlm.py                   # Zero-shot VLM baselines (Gemini / GPT-4o)
│   ├── vlm_prompts.py               # VLM prompt construction
│   └── collect_results.py           # Aggregate and print result tables
│
└── data_processing/             # Feature extraction scripts
    ├── extract_front_video_features.py    # EfficientNet-B0 front video
    ├── extract_cabin_video_features.py    # EfficientNet-B0 cabin video
    ├── extract_clip_features.py           # CLIP ViT-B/32 features
    ├── video_utils.py                     # Shared video decoding utilities
    └── gps_semantic_enrichment.py         # GPS → road context features

🚀 Quick Start

1. Get the data

# Sample dataset (a few GB, all modalities, 43 routes)
git lfs install
git clone https://huggingface.co/datasets/HenryYHW/BATON-Sample

# Full dataset — download via HuggingFace Hub
python -c "
from huggingface_hub import snapshot_download
snapshot_download('HenryYHW/BATON', repo_type='dataset', local_dir='./data')
"

2. Preprocess signals

cd baseline
python preprocess.py

3. Extract video features

cd data_processing

# EfficientNet-B0 features (used in main baselines)
python extract_front_video_features.py
python extract_cabin_video_features.py

# CLIP ViT-B/32 features (optional)
python extract_clip_features.py

4. Train baselines

cd baseline

# GRU on all modalities — Task 1
python train_nn.py --task task1 --modality Full-All --model gru --seed 42

# XGBoost on structured signals — Task 2
python train_classical.py --task task2 --model xgb --seed 42

# TCN ablation — Task 3, sensors only
python train_nn.py --task task3 --modality Text --model tcn --seed 42

# Zero-shot VLM baseline (GPT-4o or Gemini 2.5 Flash)
python run_vlm.py --model gpt4o --task task1
python run_vlm.py --model gemini --task task2

5. Collect results

python collect_results.py   # prints all result tables

πŸ“ Evaluation Protocol

Setting Value
Primary split Cross-driver (disjoint drivers in train / val / test)
Additional splits Cross-vehicle, Random
Input window 5 seconds
Prediction horizon 1 s, 3 s, 5 s (main: 3 s)
Random seeds 42, 123, 7 — report 3-seed average
Task 1 metric Macro-F1
Task 2 / 3 metrics AUPRC (primary), AUC-ROC, F1
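For reference, the primary metrics map directly onto scikit-learn calls; the labels and scores below are toy values, not benchmark results:

```python
from sklearn.metrics import average_precision_score, f1_score

# Task 1 (7-class action recognition): macro-F1 weighs every class equally,
# so rare actions count as much as frequent ones. Toy 3-class example:
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]
macro_f1 = f1_score(y_true, y_pred, average="macro")

# Tasks 2/3 (rare-event prediction, ~12-15% positives): AUPRC (average
# precision) is more informative than accuracy under class imbalance.
y_event = [0, 0, 1, 0, 1, 0]
scores = [0.1, 0.2, 0.9, 0.3, 0.4, 0.2]
auprc = average_precision_score(y_event, scores)
print(macro_f1, auprc)
```

`average_precision_score` takes continuous scores (probabilities or logits), so no decision threshold is needed; `f1_score` takes hard class predictions.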

📑 Data Access

Resource Link
📦 Full Dataset HuggingFace — HenryYHW/BATON
🔍 Sample Dataset (43 routes) HuggingFace — HenryYHW/BATON-Sample
📄 arXiv Paper arxiv.org/abs/2604.07263

📜 Citation

@article{wang2026baton,
  title   = {BATON: A Multimodal Benchmark for Bidirectional Automation Transition
             Observation in Naturalistic Driving},
  author  = {Wang, Yuhang and Xu, Yiyao and Yang, Chaoyun and Li, Lingyao
             and Sun, Jingran and Zhou, Hao},
  journal = {arXiv preprint arXiv:2604.07263},
  year    = {2026}
}

📄 License

This dataset is released for academic research use only under CC BY-NC 4.0 (Creative Commons Attribution–NonCommercial 4.0 International).

You are free to use and redistribute the data for non-commercial research, and to adapt or build upon it for non-commercial purposes — provided that:

  • Attribution — You must cite the BATON paper (see Citation above) in any publication or work that uses this dataset.
  • Non-Commercial — Commercial use of this dataset or any derivative is strictly prohibited.
  • Academic Use Only — This dataset is intended solely for academic research. Use in any commercial product, service, or application is not permitted.

For commercial licensing inquiries, please contact the authors.


🔗 Paper  ·  Full Dataset  ·  Sample Dataset  ·  GitHub