| html_url | title | comments | body | comment_length | text |
|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2945 | Protect master branch | @lhoestq now the 2 are implemented.
Please note that for the second protection, I have finally chosen to protect the master branch only from **merge commits** (see updated comment above), so no need to disable/re-enable the protection on each release (direct commits, different from merge commits, can be pushed to... | After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f582af1
- ...
I propo... | 64 | Protect master branch
After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and not use cached results from the old `filter`.
To avoid other users having this issue, we could make the caching differentiate the two; what do you think? | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 50 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests. | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 28 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 22 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | I just merged a fix, let me know if you're still having this kind of issue :)
We'll do a release soon to make this fix available | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 27 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... |
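A minimal sketch of the fingerprint-salting idea discussed above; the names here are illustrative assumptions, not the actual `datasets` internals. The point is that mixing a per-transform implementation version into the cache key makes caches written by an older `filter` unusable by design:
```python
import hashlib
import inspect

FILTER_IMPL_VERSION = "2"  # hypothetical constant, bumped when `filter`'s output schema changes

def keep_positive(example):
    return example["label"] == 1

def transform_fingerprint(parent_fingerprint: str, function) -> str:
    # Mix the parent fingerprint, the predicate's source code and the
    # transform version into one key: changing any of them invalidates
    # previously cached results.
    h = hashlib.sha256()
    for part in (parent_fingerprint, inspect.getsource(function), FILTER_IMPL_VERSION):
        h.update(part.encode("utf-8"))
    return h.hexdigest()[:16]

print(transform_fingerprint("8ba7a5e8435090aa", keep_positive))
```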
https://github.com/huggingface/datasets/issues/2937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | Hi @daqieq, thanks for reporting.
Unfortunately, I was not able to reproduce this bug:
```ipython
In [1]: from datasets import load_dataset
...: ds = load_dataset('wiki_bio')
Downloading: 7.58kB [00:00, 26.3kB/s]
Downloading: 2.71kB [00:00, ?B/s]
Using custom data configuration default
Downloading and prep... | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any er... | 109 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_datas... |
https://github.com/huggingface/datasets/issues/2937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.
Running on my work machine and on a colleague's machine it is consistently hitting this error. It's not a write access issue because the `.incomplete` directory is written just fine. It just won't rename an... | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any er... | 194 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_datas... |
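# A hedged workaround sketch for the rename failure described above (an
# assumption about the cause, not the `datasets` implementation): on Windows,
# antivirus or indexing services can briefly hold a handle on freshly written
# files, so retrying the rename a few times often succeeds.
import os
import time

def replace_with_retry(src: str, dst: str, attempts: int = 5, delay: float = 0.5) -> None:
    # os.replace is atomic when it succeeds; PermissionError here usually
    # means another process still holds a handle on src or dst.
    for attempt in range(attempts):
        try:
            os.replace(src, dst)
            return
        except PermissionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)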
https://github.com/huggingface/datasets/issues/2934 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows | I did some investigation and it seems the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to that of the returned `tf.data.Dataset`. So my (hacky) solution...
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label")
del tfd, d
gc.collect()
assert ref() is None, "Error: there is at least one refe... | 99 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="lab... |
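# A self-contained illustration of the lifecycle issue described above (an
# analogy for the mechanism, not the actual to_tf_dataset code): a generator
# closure that captures an object keeps it alive until the closure itself is
# collected, which mirrors how the table reference can outlive the dataset.
import gc
import weakref

class Table:
    pass  # stand-in for the Arrow table held by the dataset

def make_generator(table):
    def gen():
        yield table  # the closure holds a strong reference to `table`
    return gen

table = Table()
ref = weakref.ref(table)
gen = make_generator(table)
del table
gc.collect()
assert ref() is not None  # still alive: the closure references it
del gen
gc.collect()
assert ref() is None  # released once the closure is gone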
https://github.com/huggingface/datasets/issues/2924 | "File name too long" error for file locks | Hi, the filename here is less than 255 characters long:
```python
>>> len("_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock")
154
```
so not sure why it's considered too long for your filesystem.
(also note that the lock file... | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.inc... | 39 | "File name too long" error for file locks
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53... |
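# One way to check the limit directly (a POSIX-only sketch; the path is an
# assumption): encrypted home directories such as eCryptfs accept far fewer
# than the usual 255 bytes per file name, which would explain this error even
# though the lock name is only 154 characters.
import os

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")
print(os.pathconf(cache_dir, "PC_NAME_MAX"))  # 255 on most filesystems, far less on eCryptfs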
https://github.com/huggingface/datasets/issues/2924 | "File name too long" error for file locks | Yes, you're right! I need to get you more info here. Either there's something going on with the name itself that the file system doesn't like (an encoding that blows up the name length?) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system be...
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.inc... | 67 | "File name too long" error for file locks
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53... |
https://github.com/huggingface/datasets/issues/2918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | Hi @SBrandeis, thanks for reporting! ^^
I think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389
I will ask them if they are planning to fix it... | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_... | 26 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```... |
https://github.com/huggingface/datasets/issues/2918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`
```python
In [1]: import fsspec
In [2]: import json
In [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding="utf-8") as f:
...:... | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_... | 46 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```... |
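A completed, self-contained version of the truncated reproduction above, assuming the file is still hosted at this URL; reading it over HTTP routes through `aiohttp`, which raised the `ClientPayloadError` on gzip-encoded responses in the affected `fsspec` versions:
```python
import fsspec

url = (
    "https://raw.githubusercontent.com/allenai/scitldr/master/"
    "SciTLDR-Data/SciTLDR-FullText/test.jsonl"
)
# Text mode streams the response body through the HTTP filesystem; the
# failure happens while decoding the server's content-encoding.
with fsspec.open(url, "r", encoding="utf-8") as f:
    print(f.readline())
```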
https://github.com/huggingface/datasets/issues/2917 | windows download abnormal | Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used | ## Describe the bug
The script clearly exists (it is accessible from the browser), but the script download fails on Windows. I tried it again and it downloads normally on Linux. Why?
## Steps to reproduce the bug
```python3.7 + windows
, but the script download fails on Windows. I tried it again and it downloads normally on Linux. Why?
## Steps to reproduce the bug
```python3.7 + windows
 | ## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-englis... | 16 | timit_asr dataset only includes one text phrase
## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial htt... |
https://github.com/huggingface/datasets/issues/2913 | timit_asr dataset only includes one text phrase | Hi @margotwagner,
Yes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:
> Environment info
> - `datasets` version: 1.4.1 | ## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-englis... | 34 | timit_asr dataset only includes one text phrase
## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial htt... |
https://github.com/huggingface/datasets/issues/2904 | FORCE_REDOWNLOAD does not work | Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.
The second dataset is prepared in a different dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in...
With GenerateMode.FORCE_REDOWNLOAD, the documentation says
+------------------------------------+-----------+---------+
| | Downloads | Dataset |
+====================================+===========+=========+
| `REUSE_DATASET_IF_EXISTS` (default... | 99 | FORCE_REDOWNLOAD does not work
## Describe the bug
With GenerateMode.FORCE_REDOWNLOAD, the documentation says
+------------------------------------+-----------+---------+
| | Downloads | Dataset |
+====================================+===========+=========+
| `... |
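An illustrative sketch of the extraction-cache behaviour described in the comment above; the layout and hashing are assumptions for illustration, not the real cache structure. Because the key depends only on the archive's path, refreshing the archive's content does not change the key, and the stale extraction is found and reused:
```python
import hashlib
import os

def extraction_dir(archive_path: str) -> str:
    # Key derived from the path alone: a force-redownloaded archive at the
    # same path maps to the same directory as the old one.
    key = hashlib.sha256(archive_path.encode("utf-8")).hexdigest()
    root = os.path.expanduser("~/.cache/huggingface/datasets/downloads/extracted")
    return os.path.join(root, key)

print(extraction_dir("/data/my_dataset.tar.gz"))
```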
https://github.com/huggingface/datasets/issues/2902 | Add WIT Dataset | WikiMedia is now hosting the pixel values directly which should make it a lot easier!
The files can be found here:
https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/
https://analytics.wikimedia.org/published/datasets/one-off/caption... | ## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (e... | 23 | Add WIT Dataset
## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- *... |
https://github.com/huggingface/datasets/issues/2902 | Add WIT Dataset | > @hassiahk is working on it #2810
Thank you @bhavitvyamalik! Added this issue so we could track progress 😄 . Just linked the PR as well for visibility. | ## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (e... | 28 | Add WIT Dataset
## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- *... |
https://github.com/huggingface/datasets/issues/2901 | Incompatibility with pytest | Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it! | ## Describe the bug
pytest complains about xpathopen / path.open("w")
## Steps to reproduce the bug
Create a test file, `test.py`:
```python
import datasets as ds
def load_dataset():
ds.load_dataset("counter", split="train", streaming=True)
```
And launch it with pytest:
```bash
python -m pyt... | 19 | Incompatibility with pytest
## Describe the bug
pytest complains about xpathopen / path.open("w")
## Steps to reproduce the bug
Create a test file, `test.py`:
```python
import datasets as ds
def load_dataset():
ds.load_dataset("counter", split="train", streaming=True)
```
And launch it with pyt... |
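A hedged sketch of the kind of fix implied above; the real implementation lives in `datasets`' streaming utilities, so treat this as an outline. Only read modes are routed through the remote opener, while writes (like pytest's `path.open("w")`) fall back to the local filesystem:
```python
from pathlib import Path

import fsspec

def xpathopen(path, mode: str = "r", *args, **kwargs):
    # Streaming only makes sense for reads; any write-like mode must hit
    # the builtin Path.open so local files keep working.
    if set(mode) & {"w", "a", "x", "+"}:
        return Path(path).open(mode, *args, **kwargs)
    return fsspec.open(str(path), mode, *args, **kwargs).open()
```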
https://github.com/huggingface/datasets/issues/2892 | Error when encoding a dataset with None objects with a Sequence feature | This has been fixed by https://github.com/huggingface/datasets/pull/2900
We're doing a new release 1.12 today to make the fix available :) | There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
```
raises
... | 19 | Error when encoding a dataset with None objects with a Sequence feature
There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32")... |
https://github.com/huggingface/datasets/issues/2888 | v1.11.1 release date | @albertvillanova I think this issue is still valid and should not be closed till `>1.11.0` is published :) | Hello, I need to use the latest features in one of my packages but there have been no new `datasets` releases in the last 2 months.
When do you plan to publish the v1.11.1 release? | 18 | v1.11.1 release date
Hello, I need to use the latest features in one of my packages but there have been no new `datasets` releases in the last 2 months.
When do you plan to publish the v1.11.1 release?
@albertvillanova I think this issue is still valid and should not be closed till `>1.11.0` is published :) |
https://github.com/huggingface/datasets/issues/2885 | Adding an Elastic Search index to a Dataset | Hi, is this bug deterministic in your poetry env? I mean, does it always stop at 90% or is it random?
Also, can you try using another version of Elasticsearch? Maybe there's an issue with the one in your poetry env | ## Describe the bug
When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:
Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
90%|████████████████████████████... | 44 | Adding an Elastic Search index to a Dataset
## Describe the bug
When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:
Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837e... |
https://github.com/huggingface/datasets/issues/2882 | `load_dataset('docred')` results in a `NonMatchingChecksumError` | Hi @tmpr, thanks for reporting.
Two weeks ago (23rd Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).
Therefore, the checksum needs to be updated.
Normally, in the meantime, you c... | ## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It is essentially just this code:
```python
import datasets
data = datasets.load_dataset('docred')
```
## ... | 69 | `load_dataset('docred')` results in a `NonMatchingChecksumError`
## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It is essentially just this code:
```python
... |
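# The usual interim workaround while the recorded checksums are outdated
# (an assumption that skipping the integrity check is acceptable here):
from datasets import load_dataset

data = load_dataset("docred", ignore_verifications=True)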
https://github.com/huggingface/datasets/issues/2879 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?" | Hi @rcgale, thanks for reporting.
Please note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878
If you update `datasets` version, that should work.
On... | ## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_datas... | 46 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a dis... |
https://github.com/huggingface/datasets/issues/2879 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?" | I just proposed a change in the blog post.
I had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.
I still... | ## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_datas... | 134 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a dis... |
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Hi @bwang482,
I'm sorry but I'm not able to reproduce your bug.
Please note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:
- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1... | In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself return the string '4.0.1'. I have tested thi... | 47 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_V... |
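A version-safe variant of the check quoted above: parsing the version string with `packaging` yields an object with an integer `major` field, regardless of whether the config stored the version as a string:
```python
import pyarrow
from packaging import version

if version.parse(pyarrow.__version__).major < 3:
    print("pyarrow < 3: skipping the parquet packaged-dataset tests")
```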
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Reopening this, although the `test_dataset_common.py` script works fine now.
Has this got something to do with my pull request not passing `ci/circleci: run_dataset_script_tests_pyarrow` tests?
https://github.com/huggingface/datasets/pull/2873 | In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself return the string '4.0.1'. I have tested thi... | 25 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_V... |
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Hi @bwang482,
If you click on `Details` (on the right of your non-passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can see more information about the non-passing tests.
For example, for ["ci/circleci: run_dataset_script_tests_pyarrow_1" details](https://circleci.com/gh/huggingface/dat... | In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself return the string '4.0.1'. I have tested thi... | 95 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_V... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | Hi, @Chenfei-Kang.
I'm sorry, but I'm not able to reproduce your bug:
```python
from datasets import load_dataset
ds = load_dataset("glue", 'cola')
ds
```
```
DatasetDict({
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 8551
})
validation: Dataset({
... | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 66 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > Hi, @Chenfei-Kang.
>
> I'm sorry, but I'm not able to reproduce your bug:
>
> ```python
> from datasets import load_dataset
>
> ds = load_dataset("glue", 'cola')
> ds
> ```
>
> ```
> DatasetDict({
> train: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 8551
> ... | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 116 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | - For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
- In relation to the error, you only gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the ...
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 69 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
> * In relation to the error, you only gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste ...
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 154 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem. | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 20 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | One naive question: do you have internet access from the machine where you execute the code? | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 16 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
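A quick connectivity probe along the lines of the question above (the URL is an assumption about where the loading script lived at the time); if this request fails or is intercepted by a proxy, `load_dataset` cannot fetch the script either:
```python
import requests

url = "https://raw.githubusercontent.com/huggingface/datasets/master/datasets/glue/glue.py"
response = requests.head(url, timeout=10, allow_redirects=True)
print(response.status_code)  # anything other than 200 points at a network/proxy issue
```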
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.
But I can download other task datasets, such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much! | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 43 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Hi @severo, thanks for reporting.
Just note that currently not all canonical datasets support streaming mode: this is one case!
All datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet. | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 41 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)? | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 19 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | We should definitely support datasets using `pathlib` in streaming mode...
For datasets not supported in streaming mode, there is already a request to raise an error/warning: see #2654. | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 27 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
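# A sketch of why module-level joins are streamable while pathlib's `/` is
# not (a hypothetical helper mirroring the patching approach, not the exact
# datasets code): a function like os.path.join can be swapped for a
# URL-aware version, whereas Path.__truediv__ always yields local paths.
import os

def xjoin(base: str, *parts: str) -> str:
    if base.startswith(("http://", "https://")):
        return "/".join([base.rstrip("/"), *parts])  # keep URLs as URLs
    return os.path.join(base, *parts)

print(xjoin("https://host/data", "train", "part-0.jsonl"))
print(xjoin("/local/data", "train"))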
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Hi @severo, please note that the "counter" dataset will be streamable (at least until it reaches the missing file, which already raises an error in normal mode) once these PRs are merged:
- #2874
- #2876
- #2880
I have tested it. 😉 | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 40 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Now (on master), we get:
```
import datasets as ds
ds.load_dataset('counter', split="train", streaming=False)
```
```
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/sle... | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 191 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Note that we might want to open an issue to fix the "counter" dataset itself, but I leave that up to you.
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 23 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
https://github.com/huggingface/datasets/issues/2860 | Cannot download TOTTO dataset | Hola @mrm8488, thanks for reporting.
Apparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f
I'm fixing it. | Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
| 20 | Cannot download TOTTO dataset
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
Hola @mrm8488, thanks for reporting.
Apparently, the data sourc... |
https://github.com/huggingface/datasets/issues/2945 | Protect master branch | @lhoestq now the 2 are implemented.
Please note that for the the second protection, finally I have chosen to protect the master branch only from **merge commits** (see update comment above), so no need to disable/re-enable the protection on each release (direct commits, different from merge commits, can be pushed to... | After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f582af1
- ...
I propo... | 64 | Protect master branch
After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and don't use cached results from the old `filter`.
To avoid other users from having this issue we could make the caching differentiate the two, what do you think ? | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 50 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests. | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 28 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 22 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... |
https://github.com/huggingface/datasets/issues/2943 | Backwards compatibility broken for cached datasets that use `.filter()` | I just merged a fix, let me know if you're still having this kind of issues :)
We'll do a release soon to make this fix available | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in... | 27 | Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=No... |
https://github.com/huggingface/datasets/issues/2937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | Hi @daqieq, thanks for reporting.
Unfortunately, I was not able to reproduce this bug:
```ipython
In [1]: from datasets import load_dataset
...: ds = load_dataset('wiki_bio')
Downloading: 7.58kB [00:00, 26.3kB/s]
Downloading: 2.71kB [00:00, ?B/s]
Using custom data configuration default
Downloading and prep... | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any er... | 109 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_datas... |
https://github.com/huggingface/datasets/issues/2937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.
Running on my work machine and on a colleague's machine it is consistently hitting this error. It's not a write access issue because the `.incomplete` directory is written just fine. It just won't rename an... | ## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any er... | 194 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug
Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_datas... |
https://github.com/huggingface/datasets/issues/2934 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows | I did some investigation and, as it seems, the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution... | To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label")
del tfd, d
gc.collect()
assert ref() is None, "Error: there is at least one refe... | 99 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="lab... |
https://github.com/huggingface/datasets/issues/2924 | "File name too long" error for file locks | Hi, the filename here is less than 255
```python
>>> len("_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock")
154
```
so not sure why it's considered too long for your filesystem.
(also note that the lock file... | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.inc... | 39 | "File name too long" error for file locks
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53... |
https://github.com/huggingface/datasets/issues/2924 | "File name too long" error for file locks | Yes, you're right! I need to get you more info here. Either there's something going with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system be... | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.inc... | 67 | "File name too long" error for file locks
## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53... |
https://github.com/huggingface/datasets/issues/2918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | Hi @SBrandeis, thanks for reporting! ^^
I think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389
I will ask them if they are planning to fix it... | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_... | 26 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```... |
https://github.com/huggingface/datasets/issues/2918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`
```python
In [1]: import fsspec
In [2]: import json
In [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding="utf-8") as f:
...:... | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_... | 46 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```... |
https://github.com/huggingface/datasets/issues/2917 | windows download abnormal | Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used | ## Describe the bug
The script clearly exists (accessible from the browser), but the script download fails on windows. Then I tried it again and it can be downloaded normally on linux. why??
## Steps to reproduce the bug
```python3.7 + windows
, but the script download fails on windows. Then I tried it again and it can be downloaded normally on linux. why??
## Steps to reproduce the bug
```python3.7 + windows
 | ## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-englis... | 16 | timit_asr dataset only includes one text phrase
## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial htt... |
https://github.com/huggingface/datasets/issues/2913 | timit_asr dataset only includes one text phrase | Hi @margotwagner,
Yes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:
> Environment info
> - `datasets` version: 1.4.1 | ## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-englis... | 34 | timit_asr dataset only includes one text phrase
## Describe the bug
The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases.
## Steps to reproduce the bug
Note: I am following the tutorial htt... |
https://github.com/huggingface/datasets/issues/2904 | FORCE_REDOWNLOAD does not work | Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.
The second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in... | ## Describe the bug
With GenerateMode.FORCE_REDOWNLOAD, the documentation says
+------------------------------------+-----------+---------+
| | Downloads | Dataset |
+====================================+===========+=========+
| `REUSE_DATASET_IF_EXISTS` (default... | 99 | FORCE_REDOWNLOAD does not work
## Describe the bug
With GenerateMode.FORCE_REDOWNLOAD, the documentation says
+------------------------------------+-----------+---------+
| | Downloads | Dataset |
+====================================+===========+=========+
| `... |
https://github.com/huggingface/datasets/issues/2902 | Add WIT Dataset | WikiMedia is now hosting the pixel values directly which should make it a lot easier!
The files can be found here:
https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/
https://analytics.wikimedia.org/published/datasets/one-off/caption... | ## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (e... | 23 | Add WIT Dataset
## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- *... |
https://github.com/huggingface/datasets/issues/2902 | Add WIT Dataset | > @hassiahk is working on it #2810
Thank you @bhavitvyamalik! Added this issue so we could track progress 😄 . Just linked the PR as well for visibility. | ## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (e... | 28 | Add WIT Dataset
## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- *... |
https://github.com/huggingface/datasets/issues/2901 | Incompatibility with pytest | Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it! | ## Describe the bug
pytest complains about xpathopen / path.open("w")
## Steps to reproduce the bug
Create a test file, `test.py`:
```python
import datasets as ds
def load_dataset():
ds.load_dataset("counter", split="train", streaming=True)
```
And launch it with pytest:
```bash
python -m pyt... | 19 | Incompatibility with pytest
## Describe the bug
pytest complains about xpathopen / path.open("w")
## Steps to reproduce the bug
Create a test file, `test.py`:
```python
import datasets as ds
def load_dataset():
ds.load_dataset("counter", split="train", streaming=True)
```
And launch it with pyt... |
https://github.com/huggingface/datasets/issues/2892 | Error when encoding a dataset with None objects with a Sequence feature | This has been fixed by https://github.com/huggingface/datasets/pull/2900
We're doing a new release 1.12 today to make the fix available :) | There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
```
raises
... | 19 | Error when encoding a dataset with None objects with a Sequence feature
There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32")... |
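For reference, the repro above under the fixed version — a sketch assuming `datasets>=1.12`, where the null entry encodes without raising:
```python
from datasets import Dataset, Features, Sequence, Value

data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)  # no longer raises on the None entry
print(dataset[0], dataset[1])  # the null row round-trips as a null/empty sequence
```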
https://github.com/huggingface/datasets/issues/2888 | v1.11.1 release date | @albertvillanova I think this issue is still valid and should not be closed till `>1.11.0` is published :) | Hello, I need to use the latest features in one of my packages but there has been no new `datasets` release in the past 2 months.
When do you plan to publish the v1.11.1 release? | 18 | v1.11.1 release date
Hello, I need to use the latest features in one of my packages but there has been no new `datasets` release in the past 2 months.
When do you plan to publish the v1.11.1 release?
@albertvillanova I think this issue is still valid and should not be closed till `>1.11.0` is published :)
https://github.com/huggingface/datasets/issues/2885 | Adding an Elastic Search index to a Dataset | Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?
Also, can you try using another version of Elasticsearch ? Maybe there's an issue with the one in your poetry env | ## Describe the bug
When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:
Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
90%|████████████████████████████... | 44 | Adding an Elastic Search index to a Dataset
## Describe the bug
When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:
Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837e... |
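For context, a minimal sketch of the indexing call being debugged above — the host, port, and index name are assumptions, and it presumes a reachable Elasticsearch server:
```python
from datasets import load_dataset

squad = load_dataset("squad", split="validation")
# assumes an Elasticsearch instance listening on localhost:9200
squad.add_elasticsearch_index(
    "context",                         # column to index
    host="localhost",
    port=9200,
    es_index_name="hf_squad_context",  # hypothetical index name
)
scores, examples = squad.get_nearest_examples("context", "machine learning", k=5)
```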
https://github.com/huggingface/datasets/issues/2882 | `load_dataset('docred')` results in a `NonMatchingChecksumError` | Hi @tmpr, thanks for reporting.
Two weeks ago (23rd Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).
Therefore, the checksum needs to be updated.
Normally, in the meantime, you c... | ## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It is essentially just this code:
```python
import datasets
data = datasets.load_dataset('docred')
```
## ... | 69 | `load_dataset('docred')` results in a `NonMatchingChecksumError`
## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It is essentially just this code:
```python
... |
https://github.com/huggingface/datasets/issues/2879 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?" | Hi @rcgale, thanks for reporting.
Please note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878
If you update `datasets` version, that should work.
On... | ## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_datas... | 46 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a dis... |
https://github.com/huggingface/datasets/issues/2879 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?" | I just proposed a change in the blog post.
I had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.
I still... | ## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_datas... | 134 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a dis... |
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Hi @bwang482,
I'm sorry but I'm not able to reproduce your bug.
Please note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:
- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1... | In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested thi... | 47 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_V... |
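A sketch of the distinction behind the error above, using `packaging` for illustration (the exact object stored on master is per the commit the maintainer links): a plain string has no `.major`, while a parsed version does.
```python
from packaging import version

PYARROW_VERSION = "4.0.1"                # a plain string has no `.major` attribute
parsed = version.parse(PYARROW_VERSION)  # a parsed Version exposes major/minor/micro
assert parsed.major == 4
```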
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Reopening this, although the `test_dataset_common.py` script works fine now.
Has this got something to do with my pull request not passing the `ci/circleci: run_dataset_script_tests_pyarrow` tests?
https://github.com/huggingface/datasets/pull/2873 | In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested thi... | 25 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_V... |
https://github.com/huggingface/datasets/issues/2871 | datasets.config.PYARROW_VERSION has no attribute 'major' | Hi @bwang482,
If you click on `Details` (to the right of your non-passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can see more information about the non-passing tests.
For example, for ["ci/circleci: run_dataset_script_tests_pyarrow_1" details](https://circleci.com/gh/huggingface/dat... | In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested thi... | 95 | datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_V... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | Hi, @Chenfei-Kang.
I'm sorry, but I'm not able to reproduce your bug:
```python
from datasets import load_dataset
ds = load_dataset("glue", 'cola')
ds
```
```
DatasetDict({
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 8551
})
validation: Dataset({
... | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 66 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > Hi, @Chenfei-Kang.
>
> I'm sorry, but I'm not able to reproduce your bug:
>
> ```python
> from datasets import load_dataset
>
> ds = load_dataset("glue", 'cola')
> ds
> ```
>
> ```
> DatasetDict({
> train: Dataset({
> features: ['sentence', 'label', 'idx'],
> num_rows: 8551
> ... | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 116 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | - For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
- In relation to the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the ... | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 69 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?
> * In relation to the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste ... | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 154 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem. | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 20 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | One naive question: do you have internet access from the machine where you execute the code? | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 16 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2869 | TypeError: 'NoneType' object is not callable | > For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.
But I can download other task datasets, such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much! | ## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Speci... | 43 | TypeError: 'NoneType' object is not callable
## Describe the bug
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```
## Expected results
A clear and concise description of th... |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Hi @severo, thanks for reporting.
Just note that currently not all canonical datasets support streaming mode: this is one case!
All datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet. | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 41 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
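To make the `pathlib` point above concrete, a hedged sketch of the two path-building styles inside a loading script (URL hypothetical): streaming mode extends `os.path.join` to handle remote paths, while `Path`'s `/` operator bypassed that extension at the time.
```python
import os
from pathlib import Path

data_dir = "https://example.com/counter"  # in streaming mode this is a URL

# streamable: datasets can patch os.path.join for remote paths
train_file = os.path.join(data_dir, "train.txt")

# not streamable at the time: the `/` operator skips that patching
train_file = str(Path(data_dir) / "train.txt")
```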
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)? | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 19 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | We should definitely support datasets using `pathlib` in streaming mode...
For non-supported datasets in streaming mode, we have already a request of raising an error/warning: see #2654. | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 27 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Hi @severo, please note that the "counter" dataset will be streamable (at least until it reaches the missing file, which errors in normal mode too) once these PRs are merged:
- #2874
- #2876
- #2880
I have tested it. 😉 | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 40 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Now (on master), we get:
```
import datasets as ds
ds.load_dataset('counter', split="train", streaming=False)
```
```
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/sle... | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 191 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
https://github.com/huggingface/datasets/issues/2866 | "counter" dataset raises an error in normal mode, but not in streaming mode | Note that we might want to open an issue to fix the "counter" dataset by itself, but I leave that up to you. | ## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow... | 23 | "counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug
`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter',... |
https://github.com/huggingface/datasets/issues/2860 | Cannot download TOTTO dataset | Hello @mrm8488, thanks for reporting.
Apparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f
I'm fixing it. | Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
| 20 | Cannot download TOTTO dataset
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
Hello @mrm8488, thanks for reporting.
Apparently, the data sourc... |
https://github.com/huggingface/datasets/issues/2842 | always requiring the username in the dataset name when there is one | From what I can understand, you want the saved arrow file directory to include the username as well, instead of just the dataset name, if it was downloaded with the user prefix? | Another person and I have now been bitten by `datasets`'s non-strictness about requiring a dataset creator's username when one is due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `openwebtext-10k`, and all was good until we published the software an... | 30 | always requiring the username in the dataset name when there is one
Another person and I have now been bitten by `datasets`'s non-strictness about requiring a dataset creator's username when one is due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `o...
https://github.com/huggingface/datasets/issues/2842 | always requiring the username in the dataset name when there is one | I don't think the user cares how this is done, but the 2nd command should fail, IMHO, as its dataset name is invalid:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-... | Me and now another person have been bitten by the `datasets`'s non-strictness on requiring a dataset creator's username when it's due.
So both of us started with `stas/openwebtext-10k`, somewhere along the lines lost `stas/` and continued using `openwebtext-10k` and it all was good until we published the software an... | 115 | always requiring the username in the dataset name when there is one
Me and now another person have been bitten by the `datasets`'s non-strictness on requiring a dataset creator's username when it's due.
So both of us started with `stas/openwebtext-10k`, somewhere along the lines lost `stas/` and continued using `o... |
https://github.com/huggingface/datasets/issues/2839 | OpenWebText: NonMatchingSplitsSizesError | I just regenerated the verifications metadata and noticed that nothing changed: the data file is fine (the checksum didn't change), and the number of examples is still 8013769. Not sure how you managed to get 7982430 examples.
Can you try to delete your cache (by default at `~/.cache/huggingface/datasets`) and try ... | ## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430... | 62 | OpenWebText: NonMatchingSplitsSizesError
## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', ... |
https://github.com/huggingface/datasets/issues/2839 | OpenWebText: NonMatchingSplitsSizesError | I'll try without deleting the whole cache (we have large datasets already stored). I was under the impression that `download_mode="force_redownload"` would bypass the cache.
Sorry, platform should be Linux (Red Hat version 8.1) | ## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430... | 31 | OpenWebText: NonMatchingSplitsSizesError
## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', ... |
https://github.com/huggingface/datasets/issues/2839 | OpenWebText: NonMatchingSplitsSizesError | Sorry I haven't had time to work on this. I'll close and re-open if I can't figure out why I'm having this issue. Thanks for taking a look ! | ## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430... | 29 | OpenWebText: NonMatchingSplitsSizesError
## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', ... |
https://github.com/huggingface/datasets/issues/2831 | ArrowInvalid when mapping dataset with missing values | Hi ! It fails because of the feature type inference.
Because the first 1000 examples all have null values in the "match" field, it infers that the type of this field is the `null` type before writing the data to disk. But as soon as it tries to map an example with a non-null "match" field, it fails.
To fix... | ## Describe the bug
I encountered an `ArrowInvalid` when mapping dataset with missing values.
Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn't thrown).
[data_small.csv](https://github.com/huggingf... | 134 | ArrowInvalid when mapping dataset with missing values
## Describe the bug
I encountered an `ArrowInvalid` when mapping dataset with missing values.
Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn'... |
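One way around the inference problem described above is to declare the schema up front; a minimal sketch with assumed column names and types:
```python
from datasets import load_dataset, Features, Value

# declaring "match" explicitly keeps type inference from settling on `null`
# just because the first batch of rows is all missing values
features = Features({"text": Value("string"), "match": Value("string")})
ds = load_dataset("csv", data_files="data_small.csv", features=features)
```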
https://github.com/huggingface/datasets/issues/2826 | Add a Text Classification dataset: KanHope | Hi ! In your script it looks like you're trying to load the dataset `bn_hate_speech`, not KanHope.
Moreover the error `KeyError: ' '` means that you have a feature of type ClassLabel, but for a certain example of the dataset, it looks like the label is empty (it's just a string with a space). Can you make sure that... | ## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper)
- **Author:** *[AdeepH](https://github.com/adeepH)*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/d... | 75 | Add a Text Classification dataset: KanHope
## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper)
- **Author:** *[AdeepH](https://github.com/adeepH)*
- **Data:** *... |
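A hedged sketch of why an empty label blows up during encoding, with hypothetical label names (the exact exception type depends on the `datasets` version):
```python
from datasets import ClassLabel

label = ClassLabel(names=["Non_hope_speech", "Hope_speech"])  # hypothetical names
label.encode_example("Hope_speech")  # -> 1
label.encode_example(" ")            # fails: " " is not among the declared names
```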
https://github.com/huggingface/datasets/issues/2825 | The datasets.map function does not load cached dataset after moving python script | This also happened to me on Colab.
Details:
I ran the `run_mlm.py` in two different notebooks.
In the first notebook, I do tokenization since I can get 4 CPU cores without any GPUs, and save the cache into a folder which I copy to Drive.
In the second notebook, I copy the cache folder from Drive and re-run the run... | ## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called another time with totally the same parameters, the cached data are supposed to be reloaded instead of re-processing. However, it doesn't reuse cached data sometimes. I use the common data pro... | 85 | The datasets.map function does not load cached dataset after moving python script
## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called another time with totally the same parameters, the cached data are supposed to be reloaded instead of re-pr... |
https://github.com/huggingface/datasets/issues/2825 | The datasets.map function does not load cached dataset after moving python script | #2854 fixed the issue :)
We'll do a new release of `datasets` soon to make the fix available.
In the meantime, feel free to try it out by installing `datasets` from source
If you have other issues or any question, feel free to re-open the issue :) | ## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called another time with totally the same parameters, the cached data are supposed to be reloaded instead of re-processing. However, it doesn't reuse cached data sometimes. I use the common data pro... | 47 | The datasets.map function does not load cached dataset after moving python script
## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called another time with totally the same parameters, the cached data are supposed to be reloaded instead of re-pr... |
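Until the fix is released, one hedged workaround sketch (file names hypothetical) is to pin the cache file explicitly, so the mapped result is found again even if the fingerprint changes after moving the script:
```python
from datasets import load_dataset

raw = load_dataset("text", data_files="corpus.txt")  # hypothetical input file
tokenized = raw["train"].map(
    lambda ex: {"n_chars": len(ex["text"])},   # stand-in for the real tokenize step
    cache_file_name="/tmp/hf_cache/tokenized.arrow",  # pinned, reused across runs
)
```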
https://github.com/huggingface/datasets/issues/2823 | HF_DATASETS_CACHE variable in Windows | Agh - I'm a muppet. No quote marks are needed.
set HF_DATASETS_CACHE = C:\Datasets
works as intended. | I can't seem to use a custom Cache directory in Windows. I have tried:
set HF_DATASETS_CACHE = "C:\Datasets"
set HF_DATASETS_CACHE = "C:/Datasets"
set HF_DATASETS_CACHE = "C:\\Datasets"
set HF_DATASETS_CACHE = "r'C:\Datasets'"
set HF_DATASETS_CACHE = "\Datasets"
set HF_DATASETS_CACHE = "/Datasets"
In each in... | 17 | HF_DATASETS_CACHE variable in Windows
I can't seem to use a custom Cache directory in Windows. I have tried:
set HF_DATASETS_CACHE = "C:\Datasets"
set HF_DATASETS_CACHE = "C:/Datasets"
set HF_DATASETS_CACHE = "C:\\Datasets"
set HF_DATASETS_CACHE = "r'C:\Datasets'"
set HF_DATASETS_CACHE = "\Datasets"
set HF_DA... |
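For reference, a sketch of pointing the cache somewhere explicit from Python (path is an example). Note that in `cmd.exe` the space-free `set HF_DATASETS_CACHE=C:\Datasets` is the safe form, since spaces around `=` become part of the variable name and value:
```python
import os

# must be set before importing datasets, which reads the variable at import time
os.environ["HF_DATASETS_CACHE"] = r"C:\Datasets"

import datasets
print(datasets.config.HF_DATASETS_CACHE)  # confirms the override took effect
```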
https://github.com/huggingface/datasets/issues/2821 | Cannot load linnaeus dataset | Thanks for reporting ! #2852 fixed this error
We'll do a new release of `datasets` soon :) | ## Describe the bug
The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce:
```
from datasets import load_dataset
datasets = load_dataset("linnaeus")
```
This results in:
```
Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB,... | 17 | Cannot load linnaeus dataset
## Describe the bug
The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce:
```
from datasets import load_dataset
datasets = load_dataset("linnaeus")
```
This results in:
```
Downloading and preparing dataset linnaeus/linnaeus (download: ... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | ```
Using custom data configuration default
Downloading and preparing dataset reddit/default (download: 2.93 GiB, generated: 17.64 GiB, post-processed: Unknown size, total: 20.57 GiB) to /Volumes/My Passport for Mac/og-chat-data/reddit/default/1.0.0/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969...
... | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 646 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | It also doesn't seem to be "smart caching" and I received an error about a file not being found... | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 19 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | To be clear, the error I get when I try to "re-instantiate" the download after failure is:
```
OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: <HOME>/.cache/huggingface/datasets/downloads/1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json'
``` | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 32 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | Hi ! Since https://github.com/huggingface/datasets/pull/2803 we've changed the time out from 10sec to 100sec.
This should prevent the `ReadTimeoutError`. Feel free to try it out by installing `datasets` from source
```
pip install git+https://github.com/huggingface/datasets.git
```
When re-running your code you ... | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 111 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... |
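If it helps anyone landing here, a hedged sketch of the retry after installing from source (the reuse of the cached partial download is an assumption based on the maintainer's comment above):
```python
# after: pip install git+https://github.com/huggingface/datasets.git
from datasets import load_dataset

# the cached (partial) download lives under ~/.cache/huggingface/datasets/downloads;
# re-running with the longer 100 sec read timeout gives the fetch a better
# chance to complete
reddit = load_dataset("reddit", split="train")
```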
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | @lhoestq thanks for the update. The file specified by the OSError, i.e.
```
1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json
```
was not actually in that directory, so I can't delete it. | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 26 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... |