# squirelmail/model-BotDetect-CAPTCHA-Generator
This repository contains CAPTCHA datasets for training CRNN+CTC models. Each `dataset_*.tar.gz` archive contains 60 CAPTCHA styles (generated with BotDetect CAPTCHA), organized as folders `style0` through `style59`. Each style folder holds N images, where N is given by the archive name.
- `dataset_500.tar.gz` → 500 images per style (= 30,000 total)
- `dataset_1000.tar.gz` → 1,000 images per style (= 60,000 total)
- `dataset_5000.tar.gz` → 5,000 images per style (= 300,000 total)
- `dataset_10000.tar.gz` → 10,000 images per style (= 600,000 total)
- `dataset_20000.tar.gz` → 20,000 images per style (= 1,200,000 total)
- `dataset_1000_rand.tar.gz` → randomized variant with 1,000 images per style

Naming convention: `dataset_{N}.tar.gz` means each `styleX` folder holds exactly N PNG images.
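The naming convention can be checked programmatically. A minimal sketch (the helper name `expected_counts` is ours, not part of the repo):

```python
import re

def expected_counts(archive_name, n_styles=60):
    """Parse dataset_{N}[_suffix].tar.gz and return (per_style, total)."""
    m = re.match(r"dataset_(\d+)(?:_\w+)?\.tar\.gz$", archive_name)
    if m is None:
        raise ValueError(f"unexpected archive name: {archive_name}")
    per_style = int(m.group(1))
    return per_style, per_style * n_styles

print(expected_counts("dataset_1000.tar.gz"))  # (1000, 60000)
```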
```
/path/to/dataset
├── style0/
│   ├── A1B2C.png
│   ├── 9Z7QK.png
│   └── ...
├── style1/
│   ├── K9NO2.png
│   └── ...
├── ...
└── style59/
```
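After extraction, it is worth verifying that all 60 style folders are present before counting files. A minimal sketch (the function name `check_styles` is ours):

```python
import os

def check_styles(root, n_styles=60):
    """Return the list of missing styleX folders under root."""
    missing = []
    for i in range(n_styles):
        if not os.path.isdir(os.path.join(root, f"style{i}")):
            missing.append(f"style{i}")
    return missing

missing = check_styles("/workspace/dataset_1000")
print("missing folders:", missing or "none")
```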
The filename encodes the label: e.g. `K9NO2.png` → label `K9NO2`. Each image is 50×250 pixels (H=50, W=250), grayscale PNG. Labels match `^[A-Z0-9]{5}$` (exactly 5 characters, uppercase letters and digits).

```bash
# example: extract into /workspace/dataset_1000
mkdir -p /workspace/dataset_1000
tar -xvzf dataset_1000.tar.gz -C /workspace/dataset_1000
```
```bash
# total PNG files (depth 2 to only count inside style folders)
find /workspace/dataset_1000 -maxdepth 2 -type f -name '*.png' | wc -l

# per-style counts without a for-loop (prints "count styleX");
# $(NF-1) is the style folder, the second-to-last path component
find /workspace/dataset_1000 -mindepth 2 -maxdepth 2 -type f -name '*.png' \
  | awk -F/ '{print $(NF-1)}' | sort | uniq -c | sort -k2
```
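The same per-style count can be done from Python with only the standard library. A sketch, assuming the archive was extracted to the path used above:

```python
import os
from collections import Counter
from glob import glob

root = "/workspace/dataset_1000"
# map each PNG to its parent folder name (styleX) and tally
counts = Counter(
    os.path.basename(os.path.dirname(p))
    for p in glob(os.path.join(root, "style*", "*.png"))
)
for style in sorted(counts, key=lambda s: int(s[5:])):
    print(counts[style], style)
```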
Expected totals:
- dataset_500 → 500 × 60 = 30,000 files
- dataset_1000 → 60,000 files
- dataset_5000 → 300,000 files
- dataset_10000 → 600,000 files
- dataset_20000 → 1,200,000 files

```bash
# list filenames that violate the strict 5-char uppercase/digit rule
find /workspace/dataset_1000 -type f -name '*.png' \
  | awk -F/ '{print $NF}' | sed 's/\.png$//' \
  | grep -vE '^[A-Z0-9]{5}$' | head
```
CSV report via Python (pandas):
```python
import os
from glob import glob

import pandas as pd

root = "/workspace/dataset_1000"
rows = []
for s in range(60):
    for p in glob(os.path.join(root, f"style{s}", "*.png")):
        rows.append({"style": f"style{s}", "filepath": p,
                     "label": os.path.splitext(os.path.basename(p))[0]})

df = pd.DataFrame(rows)
# na=False: a missing label counts as invalid, not as a match
bad = df[~df["label"].str.match(r"^[A-Z0-9]{5}$", na=False)]
print("Invalid labels:", len(bad))
if len(bad):
    bad.to_csv("invalid_labels.csv", index=False)
```
```python
import os
from glob import glob

import pandas as pd

def load_dataset(root_dir):
    data = []
    for style_id in range(60):
        folder = os.path.join(root_dir, f"style{style_id}")
        for path in glob(os.path.join(folder, "*.png")):
            label = os.path.splitext(os.path.basename(path))[0]
            data.append((path, label, f"style{style_id}"))
    df = pd.DataFrame(data, columns=["filepath", "label", "style"])
    # enforce strict label rule; drop anything that does not match
    df = df[df["label"].str.match(r"^[A-Z0-9]{5}$")].reset_index(drop=True)
    return df

df = load_dataset("/workspace/dataset_1000")
print(df.head(), len(df))
```
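For CRNN+CTC training, the 5-character labels must be mapped to integer sequences with one index reserved for the CTC blank. A minimal encoding sketch; the charset ordering and the convention of using index 0 for the blank are our choices, not mandated by the dataset:

```python
import string

# index 0 is reserved for the CTC blank symbol
CHARSET = string.ascii_uppercase + string.digits         # 36 classes
CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(CHARSET)}  # 1..36
IDX_TO_CHAR = {i: c for c, i in CHAR_TO_IDX.items()}

def encode_label(label):
    """Label string -> list of class indices (no blanks)."""
    return [CHAR_TO_IDX[c] for c in label]

def decode_label(indices):
    """Class indices -> label string, skipping blanks."""
    return "".join(IDX_TO_CHAR[i] for i in indices if i != 0)

print(encode_label("A1B2C"))  # [1, 28, 2, 29, 3]
```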
Add new files without overwriting existing ones:
```bash
# note: style[0-9]/ covers the single-digit folders (style0-style9),
# which style[1-5][0-9]/ alone would miss
rsync -av \
  --ignore-existing \
  --include='style[0-9]/' \
  --include='style[1-5][0-9]/' \
  --include='style*/*.png' \
  --exclude='*' \
  /workspace/dataset_10000/ /workspace/dataset_20000/
```
Overwrite only if source is newer:
```bash
rsync -av --update \
  --include='style[0-9]/' \
  --include='style[1-5][0-9]/' \
  --include='style*/*.png' \
  --exclude='*' \
  /workspace/dataset_10000/ /workspace/dataset_20000/
```
Optional: keep SHA256 checksums for integrity verification.

```bash
sha256sum dataset_1000.tar.gz > dataset_1000.tar.gz.sha256
sha256sum -c dataset_1000.tar.gz.sha256
```
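The same check can be done from Python when `sha256sum` is unavailable. A sketch (the function name `sha256_file` is ours):

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks to avoid loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# compare against the recorded digest, e.g. from dataset_1000.tar.gz.sha256
# print(sha256_file("dataset_1000.tar.gz"))
```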
All images are (H, W) = (50, 250), grayscale.

For questions, dataset issues, or custom subsets, please open an issue in this repository.