---
dataset_info:
- config_name: arxiv
  features:
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 89223183645.0
    num_examples: 1558306
  download_size: 40911186876
  dataset_size: 89223183645.0
- config_name: documentation
  features:
  - name: project
    dtype: string
  - name: source
    dtype: string
  - name: language
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 5421472234.0
    num_examples: 59733
  download_size: 1853451922
  dataset_size: 5421472234.0
- config_name: ir_cpp
  features:
  - name: __index_level_0__
    dtype: string
  - name: id
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 102081135272.0
    num_examples: 2916655
  download_size: 26047978422
  dataset_size: 102081135272.0
- config_name: ir_low_resource
  features:
  - name: __index_level_0__
    dtype: string
  - name: id
    dtype: string
  - name: content
    dtype: string
  - name: size
    dtype: int64
  splits:
  - name: train
    num_bytes: 10383382043.0
    num_examples: 393988
  download_size: 2464513603
  dataset_size: 10383382043.0
- config_name: ir_python
  features:
  - name: id
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 12446664464.0
    num_examples: 154507
  download_size: 3039297625
  dataset_size: 12446664464.0
- config_name: ir_rust
  features:
  - name: __index_level_0__
    dtype: string
  - name: id
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 4764927851.0
    num_examples: 32720
  download_size: 1254786199
  dataset_size: 4764927851.0
- config_name: issues
  features:
  - name: repo_name
    dtype: string
  - name: content
    dtype: string
  - name: issue_id
    dtype: string
  splits:
  - name: train
    num_bytes: 31219575534.38484
    num_examples: 15549682
  download_size: 16483899047
  dataset_size: 31219575534.38484
- config_name: kaggle
  features:
  - name: content
    dtype: string
  - name: file_id
    dtype: string
  splits:
  - name: train
    num_bytes: 5228745262.0
    num_examples: 580195
  download_size: 2234440007
  dataset_size: 5228745262.0
- config_name: lhq
  features:
  - name: content
    dtype: string
  - name: metadata
    struct:
    - name: difficulty
      dtype: string
    - name: field
      dtype: string
    - name: topic
      dtype: string
  splits:
  - name: train
    num_bytes: 751273849.0
    num_examples: 7037500
  download_size: 272913202
  dataset_size: 751273849.0
- config_name: owm
  features:
  - name: url
    dtype: string
  - name: date
    dtype: timestamp[s]
  - name: metadata
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 56294728333.0
    num_examples: 6315233
  download_size: 27160071916
  dataset_size: 56294728333.0
- config_name: stackoverflow
  features:
  - name: date
    dtype: string
  - name: nb_tokens
    dtype: int64
  - name: text_size
    dtype: int64
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 35548199612.0
    num_examples: 10404628
  download_size: 17008831030
  dataset_size: 35548199612.0
- config_name: wikipedia
  features:
  - name: content
    dtype: string
  - name: meta
    dtype: string
  - name: red_pajama_subset
    dtype: string
  splits:
  - name: train
    num_bytes: 21572720540.0
    num_examples: 6630651
  download_size: 12153445493
  dataset_size: 21572720540.0
configs:
- config_name: arxiv
  data_files:
  - split: train
    path: arxiv/train-*
- config_name: documentation
  data_files:
  - split: train
    path: documentation/train-*
- config_name: ir_cpp
  data_files:
  - split: train
    path: ir_cpp/train-*
- config_name: ir_low_resource
  data_files:
  - split: train
    path: ir_low_resource/train-*
- config_name: ir_python
  data_files:
  - split: train
    path: ir_python/train-*
- config_name: ir_rust
  data_files:
  - split: train
    path: ir_rust/train-*
- config_name: issues
  data_files:
  - split: train
    path: issues/train-*
- config_name: kaggle
  data_files:
  - split: train
    path: kaggle/train-*
- config_name: lhq
  data_files:
  - split: train
    path: lhq/train-*
- config_name: owm
  data_files:
  - split: train
    path: owm/train-*
- config_name: stackoverflow
  data_files:
  - split: train
    path: stackoverflow/train-*
- config_name: wikipedia
  data_files:
  - split: train
    path: wikipedia/train-*
---

# StarCoder2 Extras

This is the dataset of extra sources (besides the Stack v2 code data) used to train the [StarCoder2](https://arxiv.org/abs/2402.19173) family of models. It contains the following subsets:

- Kaggle (`kaggle`): Kaggle notebooks from the [Meta-Kaggle-Code](https://www.kaggle.com/datasets/kaggle/meta-kaggle-code) dataset, converted to scripts and prefixed with information on the Kaggle datasets used in the notebook. The file headers follow a format similar to Jupyter Structured, but the code content is a single script.
- StackOverflow (`stackoverflow`): StackOverflow conversations from this [StackExchange dump](https://archive.org/details/stackexchange).
- Issues (`issues`): processed GitHub issues, same as the Stack v1 issues.
- OWM (`owm`): the [Open-Web-Math](https://huggingface.co/datasets/open-web-math/open-web-math) dataset.
- LHQ (`lhq`): Leandro's High-Quality dataset, a compilation of high-quality code files from APPS-train, CodeContests, GSM8K-train, GSM8K-SciRel, DeepMind-Mathematics, Rosetta-Code, MultiPL-T, ProofSteps, and ProofSteps-lean.
- Wiki (`wikipedia`): the English subset of the Wikipedia dump in [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
- ArXiv (`arxiv`): the ArXiv subset of the [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) dataset, further processed to retain only LaTeX source files and to strip preambles, comments, macros, and bibliographies from those files.
- IR_language (`ir_cpp`, `ir_low_resource`, `ir_python`, `ir_rust`): intermediate representations of Python, Rust, C++, and other low-resource languages.
- Documentation (`documentation`): documentation of popular libraries.

For more details on the processing of each subset, check the [StarCoder2 paper](https://arxiv.org/abs/2402.19173) or The Stack v2 [GitHub repository](https://github.com/bigcode-project/the-stack-v2/).
|
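The `dataset_size` fields in the card metadata above can be totaled to estimate the overall footprint; a quick sketch (byte counts copied from the YAML, with the fractional count of `issues` truncated to whole bytes):

```python
# dataset_size values (bytes) copied from the card's YAML metadata above.
SUBSET_SIZES = {
    "arxiv": 89_223_183_645,
    "documentation": 5_421_472_234,
    "ir_cpp": 102_081_135_272,
    "ir_low_resource": 10_383_382_043,
    "ir_python": 12_446_664_464,
    "ir_rust": 4_764_927_851,
    "issues": 31_219_575_534,
    "kaggle": 5_228_745_262,
    "lhq": 751_273_849,
    "owm": 56_294_728_333,
    "stackoverflow": 35_548_199_612,
    "wikipedia": 21_572_720_540,
}

total_bytes = sum(SUBSET_SIZES.values())
print(f"total: {total_bytes / 1e9:.1f} GB")  # ~375 GB uncompressed
```

Note that `download_size` is considerably smaller for most subsets, since the files are stored compressed.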
## Usage

```python
from datasets import load_dataset

# replace `kaggle` with one of the config names listed above
ds = load_dataset("bigcode/starcoder2data-extras", "kaggle", split="train")
```
|
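Several subsets are tens of gigabytes, so streaming can be preferable to a full download for inspection. A minimal sketch using the `datasets` streaming mode (the config list mirrors the card metadata above; `head` is a hypothetical helper, not part of the dataset):

```python
# All config names available in this dataset, as listed in the card above.
CONFIGS = [
    "arxiv", "documentation", "ir_cpp", "ir_low_resource",
    "ir_python", "ir_rust", "issues", "kaggle",
    "lhq", "owm", "stackoverflow", "wikipedia",
]

def head(config_name, n=3):
    """Yield the first n examples of a subset without downloading it fully."""
    # Imported lazily so the config list is usable without `datasets` installed.
    from datasets import load_dataset

    ds = load_dataset(
        "bigcode/starcoder2data-extras", config_name,
        split="train", streaming=True,
    )
    for i, example in enumerate(ds):
        if i >= n:
            break
        yield example
```

Every subset exposes a `content` column, so for example `next(head("kaggle"))["content"]` returns the text of the first Kaggle script (network access required).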
## Citation

```
@article{lozhkov2024starcoder,
  title={Starcoder 2 and the stack v2: The next generation},
  author={Lozhkov, Anton and Li, Raymond and Allal, Loubna Ben and Cassano, Federico and Lamy-Poirier, Joel and Tazi, Nouamane and Tang, Ao and Pykhtar, Dmytro and Liu, Jiawei and Wei, Yuxiang and others},
  journal={arXiv preprint arXiv:2402.19173},
  year={2024}
}
```