---
pretty_name: ArXiv Deep Learning Python Research Code
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: repo
      dtype: string
    - name: file
      dtype: string
    - name: code
      dtype: string
    - name: file_length
      dtype: int64
    - name: avg_line_length
      dtype: float64
    - name: max_line_length
      dtype: int64
    - name: extension_type
      dtype: string
  splits:
    - name: train
      num_bytes: 3590067176.125193
      num_examples: 391496
  download_size: 1490724325
  dataset_size: 3590067176.125193
language:
  - en
license: other
size_categories:
  - 100K<n<1M
tags:
  - code
  - deep-learning
  - arxiv
  - research
  - python
task_categories:
  - text-generation
---

# ArXiv Deep Learning Python Research Code

A curated corpus of Python source files extracted from GitHub repositories referenced in ArXiv papers. It contains 391,496 files (1.49 GB), filtered to code that references major deep learning frameworks, and is designed for training and evaluating code LLMs on research-grade code.

## Dataset Summary

| Statistic | Value |
|---|---|
| Total files | 391,496 |
| Total size | 1.49 GB |
| Source repositories | 34,099 |
| Time span | ArXiv inception through July 2023 |

## Dataset Structure

| Field | Type | Description |
|---|---|---|
| `repo` | string | GitHub repository name |
| `file` | string | File path within the repository |
| `code` | string | File contents |
| `file_length` | int64 | Number of characters in the file |
| `avg_line_length` | float64 | Average line length in characters |
| `max_line_length` | int64 | Maximum line length in characters |
| `extension_type` | string | File extension |
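The three length statistics can be recomputed from `code` directly. A minimal sketch — the exact counting conventions used upstream (e.g. how trailing newlines are handled) are an assumption:

```python
def length_stats(code: str) -> dict:
    """Recompute file_length, avg_line_length, and max_line_length.

    Assumes file_length counts characters and the line statistics are
    taken over newline-split lines; upstream conventions may differ.
    """
    lines = code.split("\n")
    return {
        "file_length": len(code),
        "avg_line_length": sum(len(l) for l in lines) / len(lines),
        "max_line_length": max(len(l) for l in lines),
    }

stats = length_stats("import torch\n\nx = torch.ones(3)\n")
```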

## Usage

```python
from datasets import load_dataset

# full dataset
ds = load_dataset("AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code", split="train")

# streaming
ds = load_dataset("AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code", streaming=True, split="train")
for sample in ds:
    print(sample["repo"], sample["file"])
    break
```
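The per-file statistics make it easy to pre-filter before tokenization, e.g. to drop very large or minified-looking files. A sketch — the thresholds below are illustrative assumptions, not dataset conventions:

```python
def keep(sample: dict) -> bool:
    # Illustrative thresholds: skip huge files and files with
    # extremely long lines (often generated or minified code).
    return sample["file_length"] < 50_000 and sample["max_line_length"] < 1_000

# Apply lazily in streaming mode so nothing is downloaded up front:
# ds = load_dataset(
#     "AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code",
#     split="train",
#     streaming=True,
# ).filter(keep)
```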

## Data Collection

34,099 active GitHub repositories referenced in ArXiv papers, from the archive's inception through July 21st, 2023, were identified and cloned, totaling 773 GB of compressed repositories.

These repositories were filtered to files mentioning any of the following frameworks: `torch`, `jax`, `flax`, `stax`, `haiku`, `keras`, `fastai`, `xgboost`, `caffe`, or `mxnet`, yielding 1.4 million files, which were further filtered to the final 391,496.
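The framework filter described above can be approximated with a word-boundary regex over each file's contents. A sketch — the exact matching rules used upstream are an assumption:

```python
import re

FRAMEWORKS = [
    "torch", "jax", "flax", "stax", "haiku",
    "keras", "fastai", "xgboost", "caffe", "mxnet",
]
# Word boundaries so e.g. "blowtorch" does not count as "torch".
PATTERN = re.compile(r"\b(" + "|".join(FRAMEWORKS) + r")\b")

def mentions_framework(code: str) -> bool:
    return PATTERN.search(code) is not None
```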

## Sensitive Information

The dataset may contain emails, IP addresses, and API/SSH keys that were previously published in public GitHub repositories.
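If you plan to train on this corpus, a light redaction pass over `code` is prudent. A minimal sketch using regexes for emails and two common key formats — these patterns are illustrative, not exhaustive, and real secret scanning needs a dedicated tool:

```python
import re

# Illustrative patterns only; not a complete secret scanner.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "<AWS_KEY>"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?"
                r"-----END [A-Z ]*PRIVATE KEY-----"), "<PRIVATE_KEY>"),
]

def redact(code: str) -> str:
    for pattern, repl in REDACTIONS:
        code = pattern.sub(repl, code)
    return code
```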

## Citation

```bibtex
@misc{arxiv_deep_learning_python_research_code,
    title={ArXiv Deep Learning Python Research Code},
    author={Matthew Kenney},
    year={2023},
    publisher={Hugging Face},
    url={https://huggingface.co/datasets/AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code}
}
```