
PIVtools Turbulent Channel Validation Dataset

Synthetic particle-image-velocimetry (PIV) images of a turbulent channel flow at Re_τ ≈ 1000, paired with DNS-derived ground-truth statistics. Designed to benchmark PIV algorithms end-to-end against a reference with known answers.

Two cases are provided:

  • Case A (clean): 85 000 particles per 2048 × 2048 image (≈ 5.2 ppw at 16 × 16 windows), no noise. Sets an upper bound on PIV accuracy.
  • Case B (noisy): 22 000 particles per image (≈ 1.3 ppw), Gaussian sensor noise (mean 80, std 16, SNR ≈ 8). Realistic experimental conditions.

Each case contains 4 000 image pairs in both planar and stereo geometries. The stereo cameras are placed symmetrically at ± 45° from the sheet normal, in a side-scatter arrangement. Ground truth is provided separately for each case — both derive from the same underlying JHTDB channel snapshots but with their respective particle counts, so finite-sample statistics are self-consistent.

Companion to the PIVtools software paper (SoftwareX, submitted). The dataset is self-contained: drop it next to a PIVtools install and the benchmark scripts reproduce every validation figure in the paper.

Quickstart

# 1. Install pivtools (C extensions are pre-built in the PyPI wheel)
pip install pivtools

# 2. Download this dataset
hf download MTT69/TurbulentChannel --repo-type dataset --local-dir ./tc

# 3. Process the planar noisy images (ensemble PIV — example)
pivtools-cli init --output ./work/config.yaml
# edit config.yaml to point sources at ./tc/planar_noisy
pivtools-cli ensemble --config ./work/config.yaml

# 4. Benchmark against DNS  (use ground_truth/clean for Case A, ground_truth/noisy for Case B)
python ./tc/scripts/benchmark_comparison.py \
    --mode ensemble \
    --gt-dir ./tc/ground_truth/noisy \
    --ensemble-dir ./work/calibrated_piv/4000/Cam1/ensemble \
    --num-frames 4000 \
    --output-dir ./work/validation

Contents

MTT69/TurbulentChannel/
├── README.md                         (this file)
├── LICENSE                           (CC-BY-4.0)
├── ground_truth/
│   ├── clean/direct_stats.mat        DNS statistics for Case A (85k particles)
│   └── noisy/direct_stats.mat        DNS statistics for Case B (22k particles)
├── planar_clean/                     Case A planar images (4000 pairs, 85k particles)
│   └── B00001_A.tif … B04000_B.tif   2048 × 2048 TIFF, flat at root
├── planar_noisy/                     Case B planar images (4000 pairs, 22k particles)
│   ├── B00001_A.tif … B04000_B.tif
│   └── calibration_boards/           20 synthetic dotboard calibration images
│                                     (shared between Case A and Case B)
├── stereo_clean/                     Case A stereo images (4000 pairs × 2 cameras)
│   ├── camera1/                      cam 1 TIFFs
│   ├── camera2/                      cam 2 TIFFs
│   ├── mask_Cam1.mat                 pixel-space masks
│   └── mask_Cam2.mat
├── stereo_noisy/                     Case B stereo images (4000 pairs × 2 cameras)
│   ├── camera1/
│   ├── camera2/
│   ├── calibration/
│   │   ├── cam1/                     20 stereo dotboard images, cam 1
│   │   └── cam2/                     20 stereo dotboard images, cam 2
│   │                                 (shared between Case A and Case B)
│   ├── mask_Cam1.mat
│   └── mask_Cam2.mat
└── scripts/
    ├── benchmark_comparison.py       Planar + ensemble vs DNS
    ├── stereo_benchmark_comparison.py Stereo 3-component + 6 stresses vs DNS
    ├── cross_method_comparison.py    Multi-method overlay figures
    ├── paper_figures.py              Combined clean + noisy paper figures
    ├── tcf_direct_stats.py           Recompute ground truth from JHTDB particles
    └── sig_configs/                  EUROSIG configuration files (.cdl)

Calibration boards are shared. To avoid duplicate uploads, the calibration images for both Case A and Case B live at planar_noisy/calibration_boards/ (planar) and stereo_noisy/calibration/{cam1,cam2}/ (stereo). When processing Case A, point your PIVtools config at those same calibration paths.

Image specifications

Parameter               Value
Image size              2048 × 2048 px, 16-bit TIFF
Particle diameter       3 px
Laser sheet thickness   16 px (1.2 mm physical)
Number of pairs         4000
Case A particle count   85 000 per image (≈ 5.2 ppw at 16 × 16 windows), no noise
Case B particle count   22 000 per image (≈ 1.3 ppw at 16 × 16 windows)
Case B noise            Gaussian, mean = 80, std = 16, SNR ≈ 8
Stereo geometry         Two cameras at ± 45° from the sheet normal (side-scatter arrangement)
dt                      Matches JHTDB snapshot spacing (see CDL configs)
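The quoted seeding densities follow directly from the table values; a quick back-of-envelope check in Python (an illustrative snippet, not part of the dataset scripts):

```python
# Particles per interrogation window (ppw) from the table values above.
IMAGE_PX = 2048 * 2048   # 2048 x 2048 image
WINDOW_PX = 16 * 16      # 16 x 16 interrogation window

def ppw(n_particles: int) -> float:
    """Mean particle count per 16 x 16 window, assuming uniform seeding."""
    return n_particles * WINDOW_PX / IMAGE_PX

print(f"Case A: {ppw(85_000):.1f} ppw")  # ~5.2
print(f"Case B: {ppw(22_000):.1f} ppw")  # ~1.3
```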

Ground truth

Two ground-truth files are provided, one per case, each in its own subdirectory so the benchmark scripts can point at them directly via --gt-dir:

  • ground_truth/clean/direct_stats.mat — Case A reference (85 000 particle trajectories)
  • ground_truth/noisy/direct_stats.mat — Case B reference (22 000 trajectories)

Both are computed directly from the JHTDB particle position snapshots used to render the corresponding images. Benchmark Case A PIV against the clean file and Case B against the noisy one — finite-sample statistics are self-consistent within each case. Both share this schema:

Key                         Shape      Description
y_plus                      (N,)       Wall-normal coordinate, wall units
U_plus                      (N, 3)     Mean velocity [U, V, W] in wall units
stress_plus                 (N, 3, 3)  Reynolds stress tensor in wall units
stress_ci_lo, stress_ci_hi  (N, 3, 3)  95% confidence interval on stresses
umean_ci_lo, umean_ci_hi    (N, 3)     95% CI for mean velocity
u_tau                       scalar     Friction velocity (mm/s)
delta_nu                    scalar     Viscous length scale (mm)
Re_tau                      scalar     Friction Reynolds number
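The files load with SciPy. The sketch below builds a small stand-in file with the same keys (synthetic placeholder values, N = 4) and reads it back, to illustrate how the real direct_stats.mat files are expected to load:

```python
import numpy as np
from scipy.io import savemat, loadmat

N = 4  # small stand-in profile; the real files use the full wall-normal grid
stats = {
    "y_plus": np.linspace(1.0, 1000.0, N),
    "U_plus": np.zeros((N, 3)),
    "stress_plus": np.zeros((N, 3, 3)),
    "u_tau": 50.0,     # mm/s (placeholder value)
    "delta_nu": 0.05,  # mm (placeholder value)
    "Re_tau": 1000.0,
}
savemat("direct_stats_demo.mat", stats)

loaded = loadmat("direct_stats_demo.mat")
# Note: loadmat promotes everything to at least 2-D, so squeeze before use.
y_plus = loaded["y_plus"].squeeze()   # back to shape (N,)
Re_tau = loaded["Re_tau"].item()      # back to a Python scalar
print(y_plus.shape, Re_tau)
```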

In particular, the Case B ground truth reflects finite-sample statistics of the 22 000-particle subsample used for rendering, not the full DNS, so always benchmark Case B PIV against the Case B file.

How this was generated

Synthetic images are rendered from JHTDB turbulent-channel particle trajectories using the EUROSIG / EUROPIV synthetic image generator. The configuration files in scripts/sig_configs/ are the authoritative build instructions:

Configuration                                    Role
sigconf_planar_noisy_A.cdl                       Planar frame A, 22k particles, noise pattern A
sigconf_planar_noisy_B.cdl                       Planar frame B, 22k particles, noise pattern B
SIGconf_Stereo_cam1_noisy_A.cdl, ..._B.cdl       Stereo cam 1, frames A and B
SIGconf_Stereo_cam2_noisy_A.cdl, ..._B.cdl       Stereo cam 2, frames A and B
sigconf_planar.cdl, SIGconf_Stereo_cam{1,2}.cdl  Case A planar + stereo (85k particles, no noise)

To regenerate images bit-for-bit, install EUROSIG and invoke each .cdl with its associated particle-position files from JHTDB. See the SIG documentation for build instructions.

Scripts

benchmark_comparison.py — single-method benchmark

Compares planar or ensemble PIV against the DNS ground truth; produces U+, Reynolds stress, residual, trace invariant, and noise-decomposition plots.

python scripts/benchmark_comparison.py \
    --mode ensemble \
    --gt-dir ./ground_truth/noisy \
    --ensemble-dir <path/to/your/ensemble_result_directory> \
    --num-frames 4000 \
    --output-dir ./out

Flag                  Description
--mode / -m           instantaneous or ensemble
--runs / -r           Comma-separated 0-based pass indices (e.g. 2,3)
--windows / -w        Labels for those passes (e.g. 32,16)
--gt-dir / -g         Directory containing direct_stats.mat, e.g. ./ground_truth/noisy or ./ground_truth/clean (required)
--base-dir / -b       PIV results base (instantaneous mode)
--ensemble-dir / -e   Direct path to ensemble result directory
--num-frames / -n     Frame count subdirectory (default 1000; use 4000 for this dataset)
--output-dir / -o     Output directory
--y-plus-offset / -y  Additional y+ offset on top of the hardcoded +1
--show-fit-lines      Overlay log-law and viscous-sublayer curves

stereo_benchmark_comparison.py — stereo 3C + 6-stress benchmark

Uses LaTeX for axis labels (text.usetex=True); requires a working LaTeX installation such as MiKTeX or TeX Live.

python scripts/stereo_benchmark_comparison.py \
    --gt-dir ./ground_truth/noisy \
    --stereo-base <path/to/your/stereo_results> \
    --num-frames 4000 \
    --output-dir ./out

cross_method_comparison.py — multi-method overlay

Publication-quality plots comparing one pass from each of instantaneous, ensemble, and stereo against DNS on the same axes. Okabe-Ito colourblind palette.

python scripts/cross_method_comparison.py \
    --gt-dir ./ground_truth/noisy \
    --output-dir ./out \
    --inst-stats <path/to/instantaneous/mean_stats.mat> \
    --ens-dir <path/to/ensemble_dir> \
    --stereo-stats <path/to/stereo/mean_stats.mat>

paper_figures.py — combined Case A + Case B figures

Reproduces the figures in the PIVtools paper: Case A (open symbols) and Case B (filled symbols) overlaid. Any combination of paths may be supplied — the script plots whichever it receives.

# Case B only (what this dataset ships today)
python scripts/paper_figures.py \
    --gt-noisy-dir ./ground_truth \
    --inst-noisy-stats <path/to/noisy/instantaneous/mean_stats.mat> \
    --ens-noisy-dir <path/to/noisy/ensemble_dir> \
    --stereo-noisy-stats <path/to/noisy/stereo/mean_stats.mat> \
    --output-dir ./out

tcf_direct_stats.py — recompute ground truth

If you regenerate the synthetic images via EUROSIG, this script recomputes direct_stats.mat from the underlying JHTDB particle position files (B*_A.data, B*_B.data).

python scripts/tcf_direct_stats.py \
    --data-dir <path/to/particle_positions> \
    --output-dir ./ground_truth

Unit conventions

Quantity             PIVtools storage  Benchmark display
Velocity             m/s               mm/s (× 1000)
Reynolds stress      (m/s)²            (mm/s)² (× 1e6)
Spatial coordinates  mm                wall units y⁺ = y / δ_ν
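As an illustration of the conversions (the function name is hypothetical, not PIVtools API):

```python
# Convert PIVtools-native units to the wall units used in the benchmark plots.
# u_tau (mm/s) and delta_nu (mm) come from the ground-truth file.
def to_wall_units(u_ms: float, y_mm: float,
                  u_tau_mms: float, delta_nu_mm: float) -> tuple[float, float]:
    u_mms = u_ms * 1000.0        # m/s -> mm/s
    U_plus = u_mms / u_tau_mms   # velocity in wall units
    y_plus = y_mm / delta_nu_mm  # wall-normal coordinate in wall units
    return U_plus, y_plus

# e.g. 0.5 m/s at y = 1 mm, with placeholder u_tau = 50 mm/s, delta_nu = 0.05 mm:
print(to_wall_units(0.5, 1.0, 50.0, 0.05))  # (10.0, 20.0)
```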

Masks

stereo_clean/ and stereo_noisy/ each contain mask_Cam{1,2}.mat files holding pixel-space boolean masks (same shape as the images) that exclude regions outside the valid field of view. PIVtools loads them automatically when configured with masking.enabled: true and mask_file_pattern: mask_Cam{cam}.mat.
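Outside PIVtools, the masks can be read directly with SciPy. The round-trip below uses a synthetic stand-in mask; the variable name inside the real .mat files may differ, so inspect loadmat(...).keys() first:

```python
import numpy as np
from scipy.io import savemat, loadmat

# Synthetic stand-in for mask_Cam1.mat: True inside the valid field of view.
mask = np.zeros((2048, 2048), dtype=bool)
mask[256:-256, 256:-256] = True
savemat("mask_Cam1_demo.mat", {"mask": mask})

# MATLAB logical arrays typically load back as uint8, so cast to bool.
loaded = loadmat("mask_Cam1_demo.mat")["mask"].astype(bool)
image = np.random.default_rng(0).integers(0, 65535, (2048, 2048), dtype=np.uint16)
masked = np.where(loaded, image, 0)  # zero out pixels outside the field of view
print(loaded.sum(), masked[0, 0])
```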

Citation

If you use this dataset, please cite both the PIVtools paper and the underlying DNS source.

@article{taylor_pivtools,
  title={PIVtools: an open-source PIV framework with integrated planar, stereoscopic, and ensemble pipelines},
  author={Taylor, M.T. and Lawson, J.M. and Ganapathisubramani, B.},
  journal={SoftwareX},
  note={submitted}
}

@article{lee2015direct,
  title={Direct numerical simulation of turbulent channel flow up to Re_tau = 5200},
  author={Lee, M. and Moser, R.D.},
  journal={J. Fluid Mech.},
  year={2015}
}

@article{li2008public,
  title={A public turbulence database cluster and applications to study Lagrangian evolution of velocity increments in turbulence},
  author={Li, Y. and others},
  journal={J. Turbulence},
  year={2008}
}

DNS reference data is from the Johns Hopkins Turbulence Database (JHTDB).

License

CC-BY-4.0 — free to use, modify, and redistribute with attribution to the PIVtools paper.

The DNS ground truth is derived from publicly accessible JHTDB data and is redistributed here under the same permissive terms; consult the JHTDB usage policy (http://turbulence.pha.jhu.edu) for their citation requirements.
