Schema of the dataset (14 columns, as reported by the dataset viewer):

| Column | Type | Range / cardinality |
|---|---|---|
| `repo` | string | 32 distinct values |
| `instance_id` | string | 13–37 characters |
| `base_commit` | string | 40 characters |
| `patch` | string | 1–1.89M characters |
| `test_patch` | string | 1 distinct value |
| `problem_statement` | string | 304–69k characters |
| `hints_text` | string | 0–246k characters |
| `created_at` | string | 20 characters |
| `version` | string | 1 distinct value |
| `FAIL_TO_PASS` | string | 1 distinct value |
| `PASS_TO_PASS` | string | 1 distinct value |
| `environment_setup_commit` | string | 1 distinct value |
| `traceback` | string | 64–23.4k characters |
| `__index_level_0__` | int64 | 29–19k |
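The schema above maps directly onto the Hugging Face `datasets` API. Below is a minimal sketch of loading the dataset and reading a record; the dataset ID is a placeholder assumption, so substitute this dataset's actual `<owner>/<name>` path.

```python
from datasets import load_dataset

# Placeholder ID -- replace with this dataset's actual "<owner>/<name>" path.
ds = load_dataset("owner/dataset-name", split="train")

# Each record mirrors the schema table above.
row = ds[0]
print(row["repo"])             # e.g. "DataDog/integrations-core"
print(row["instance_id"])      # e.g. "DataDog__integrations-core-446"
print(row["traceback"][:120])  # first characters of the stored traceback
```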
The rows below reproduce the dataset preview. Every long field is cut off by the viewer; a trailing `...` marks truncated content.

---

**`DataDog__integrations-core-446`** (`DataDog/integrations-core`)

- base_commit: `0b9be7366a08b2fa1b83c036d823d8848762770f`
- created_at: 2017-05-29T13:10:25Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 29

Patch (truncated):

```diff
diff --git a/postgres/check.py b/postgres/check.py
--- a/postgres/check.py
+++ b/postgres/check.py
@@ -651,14 +651,17 @@ def _get_custom_metrics(self, custom_metrics, key):
self.log.debug("Metric: {0}".format(m))
- for ref, (_, mtype) in m['metrics'].iteritems():
- cap_mtype =...
```

Problem statement (truncated):

> [postgres] Improve config reading errors
> I had this `postgres.yaml`:
> ```
> init_config:
> instances:
> - host: pepepe
> ...
> custom_metrics:
> - query: SELECT %s FROM pg_locks WHERE granted = false;
> metrics:
> count(distinct pid): [postgresql.connections_locked]
> descriptors: []
> ...

Hints (truncated):

> Thanks a lot for this report @mausch!
> We can't validate the config in a consistent manner, which makes something like this tricky to make the error better. We will work on making this a lot better in the future.
> However, what we can do in the very near future is make the documentation both online and in the confi...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "/opt/datadog-agent/agent/checks/__init__.py", line 745, in run
    self.check(copy.deepcopy(instance))
  File "/opt/datadog-agent/agent/checks.d/postgres.py", line 606, in check
    custom_metrics = self._get_custom_metrics(instance.get('custom_metrics', []), key)
  File...
```
---

**`DataDog__integrations-core-5659`** (`DataDog/integrations-core`)

- base_commit: `3b850d826a2f245e9dcc8a1d87d5e2343123882e`
- created_at: 2020-02-06T12:16:14Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 36

Patch (truncated):

```diff
diff --git a/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py b/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py
--- a/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py
+++ b/datadog_checks_base/datadog_checks/base/checks/win/wmi/__init__.py
@@ -114,14 +114,15 @@ def...
```

Problem statement (truncated):

> WMI integration throws Exception: SWbemLocator Not enough storage is available to process this command
> ```text
> ===============
> Agent (v7.16.0)
> ===============
> Status date: 2020-02-05 15:56:45.740020 GMT
> Agent start: 2020-02-05 15:03:08.601503 GMT
> Pid: 25188
> Go Version: go1.12.9
> Python Version: 3.7...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\site-packages\datadog_checks\base\checks\win\wmi\sampler.py", line 464, in _query
    raw_results = self.get_connection().ExecQuery(wql, "WQL", query_flags)
  File "C:\Program Files\Datadog\Datadog Agent\embedded3\lib\si...
```
---

**`DataDog__integrations-core-9857`** (`DataDog/integrations-core`)

- base_commit: `8006a053c108af2cf1988efe23db8f58df8262dc`
- created_at: 2021-08-05T15:17:59Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 58

Patch (truncated):

```diff
diff --git a/mongo/datadog_checks/mongo/collectors/custom_queries.py b/mongo/datadog_checks/mongo/collectors/custom_queries.py
--- a/mongo/datadog_checks/mongo/collectors/custom_queries.py
+++ b/mongo/datadog_checks/mongo/collectors/custom_queries.py
@@ -56,8 +56,10 @@ def _collect_custom_metrics_for_query(self, api, r...
```

Problem statement (truncated):

> MongoDB: Collection-agnostic aggregations like $currentOp doesn't work
> Agent 7.29.1, running on Ubuntu Linux 18.04.
> **Steps to reproduce the issue:**
> Add the following configuration to `/etc/datadog-agent/conf.d/mongo.d/conf.yaml` and restart the agent:
> ```
> custom_queries:
> - metric_prefix: mongodb.cust...

Hints (truncated):

> Hey @atodd-circleci
> Acknowledging the limitation, I'm able to reproduce.
> I'm thinking we should be able to work around that by putting `$cmd.aggregate` instead of "1" as the collection name here: https://github.com/DataDog/integrations-core/blob/master/mongo/datadog_checks/mongo/collectors/custom_queries.py#L113 but...

Traceback (truncated):

```text
Traceback (most recent call last):
2021-07-22 06:44:38 UTC | CORE | WARN | (pkg/collector/python/datadog_agent.go:122 in LogMessage) | mongo:375a6f2e54dabf11 | (custom_queries.py:153) | Errors while collecting custom metrics with prefix mongodb.custom.queries_slower_than_60sec
TypeError: name must be an instance of ...
```
---

**`Lightning-AI__lightning-1360`** (`Lightning-AI/lightning`)

- base_commit: `ebd9fc9530242e1c9b5f3093dc62ceb4185735b0`
- created_at: 2020-04-03T13:32:07Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 104

Patch (truncated):

```diff
diff --git a/pytorch_lightning/loggers/wandb.py b/pytorch_lightning/loggers/wandb.py
--- a/pytorch_lightning/loggers/wandb.py
+++ b/pytorch_lightning/loggers/wandb.py
@@ -65,10 +65,11 @@ def __init__(self, name: Optional[str] = None, save_dir: Optional[str] = None,
def __getstate__(self):
state = self._...
```

Problem statement (truncated):

> WandbLogger cannot be used with 'ddp'
> <!--
> ### Common bugs:
> 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
> 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
> -->
> ## 🐛 Bug...

Hints (truncated):

> Hi! thanks for your contribution!, great first issue!
> Some hacky solutions: calling `reinit=True` for wandb, adding or this terrible hack:
> ```python
> def init_ddp_connection(self, *args, **kwargs):
>     super().init_ddp_connection(*args, **kwargs)
>     if torch.distributed.get_rank() == 0:
>         import wandb
>         ...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "/home/rmrao/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 331, in ddp_train
    self.run_pretrai...
```
---

**`Lightning-AI__lightning-1377`** (`Lightning-AI/lightning`)

- base_commit: `b8ff9bc1d242a18f5e7147f34d63f43fcdd0e50a`
- created_at: 2020-04-04T16:35:26Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 105

Patch (truncated):

```diff
diff --git a/pytorch_lightning/loggers/tensorboard.py b/pytorch_lightning/loggers/tensorboard.py
--- a/pytorch_lightning/loggers/tensorboard.py
+++ b/pytorch_lightning/loggers/tensorboard.py
@@ -9,6 +9,7 @@
from torch.utils.tensorboard import SummaryWriter
from pytorch_lightning.loggers.base import LightningLoggerB...
```

Problem statement (truncated):

> Tensorboard logger error: lightning_logs directory not exists in multi-node DDP on nodes with rank != 0
> ## 🐛 Bug
> In multi-node DDP train mode on all nodes except rank 0 errors appears at the start of the training caused by accessing lightning_logs directory in tensorboard logger which is not exist at the moment.
> ...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 342, ...
```
---

**`Lightning-AI__lightning-1385`** (`Lightning-AI/lightning`)

- base_commit: `4ed3027309fe1882554e9b7ffe33f1aa92c88106`
- created_at: 2020-04-05T23:51:47Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 107

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/distrib_data_parallel.py b/pytorch_lightning/trainer/distrib_data_parallel.py
--- a/pytorch_lightning/trainer/distrib_data_parallel.py
+++ b/pytorch_lightning/trainer/distrib_data_parallel.py
@@ -363,15 +363,19 @@ def load_spawn_weights(self, original_model):
:param model...
```

Problem statement (truncated):

> Trainer DDP should invoke load_spawn_weights() only in proc_rank == 0
> ## 🐛 Bug
> Trainer DDP load_spawn_weights should happen only in proc_rank == 0 since only in this process (node) `save_spawn_weights` actually saves checkpoint
> ### To Reproduce
> Steps to reproduce the behavior:
> 1. setup two-node cluster.
> ...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "app.py", line 166, in <module>
    main_() # pylint: disable=no-value-for-parameter
  File "app.py", line 162, in main_
    trainer.fit(model)
  File "/home/ubuntu/anaconda3/envs/nightly_pt/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 593, in ...
```
---

**`Lightning-AI__lightning-1423`** (`Lightning-AI/lightning`)

- base_commit: `3f1e4b953f84ecdac7dada0c6b57d908efc9c3d3`
- created_at: 2020-04-09T09:44:35Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 111

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/distrib_parts.py b/pytorch_lightning/trainer/distrib_parts.py
--- a/pytorch_lightning/trainer/distrib_parts.py
+++ b/pytorch_lightning/trainer/distrib_parts.py
@@ -566,7 +566,7 @@ def check_gpus_data_type(gpus):
:return: return unmodified gpus variable
"""
- if gpus...
```

Problem statement (truncated):

> Use isinstance() instead of type() in trainer.distrib_parts.check_gpus_data_type
> <!--
> ### Common bugs:
> 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
> 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/p...

Hints (truncated):

> Hi! thanks for your contribution!, great first issue!
> I do like this shift from `type` to an `isinstance` which extend accepted types also to child...
> as always a good PR is always welcome
> cc: @PyTorchLightning/core-contributors @jeremyjordan

Traceback (truncated):

```text
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda/miniconda3/envs/ai/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 366, in __init__
    self.data_parallel_device_ids = parse_gpu_ids(self.gpus)
  File "/opt/anaconda/miniconda3/envs/ai/lib/python...
```
---

**`Lightning-AI__lightning-1513`** (`Lightning-AI/lightning`)

- base_commit: `9b31272cf0f3079a244944096b4a81eec20fe555`
- created_at: 2020-04-17T07:59:07Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 128

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py
--- a/pytorch_lightning/trainer/data_loading.py
+++ b/pytorch_lightning/trainer/data_loading.py
@@ -61,6 +61,7 @@ class TrainerDataLoadingMixin(ABC):
train_percent_check: float
val_percent_check: float
test...
```

Problem statement (truncated):

> 0.7.3 breaks reusable dataloaders in DDP
> ## 🐛 Bug
> 0.7.3 breaks reusable dataloaders in DDP
> ```
> Traceback (most recent call last):
> File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
> fn(i, *args)
> File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning...

Hints (truncated):

> ummm yeah. we should change the dataloader swap with swapping a dataloader init from the class or not swipe the dataloder at all but set the correct sampler.
> @justusschock any ideas?
> This is a mixture of #1425 and #1346
> And I don't think we can prevent this when we want to set correct samplers also in subclass...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 345, in ddp_train
    self.run_pretrain_routine(model)
  Fi...
```
---

**`Lightning-AI__lightning-1582`** (`Lightning-AI/lightning`)

- base_commit: `5ab5084f7b9e137c1e7769228aaed8da92eaad6e`
- created_at: 2020-04-23T20:27:40Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 140

Patch (truncated):

```diff
diff --git a/pytorch_lightning/loggers/base.py b/pytorch_lightning/loggers/base.py
--- a/pytorch_lightning/loggers/base.py
+++ b/pytorch_lightning/loggers/base.py
@@ -280,6 +280,7 @@ class LoggerCollection(LightningLoggerBase):
Args:
logger_iterable: An iterable collection of loggers
"""
+
def _...
```

Problem statement (truncated):

> After update from 0.5.x to 0.7.3 merge_dicts #1278 sometimes breaks training
> ## 🐛 Bug
> After I updated from a quite old lightning version to the newest one, I sometimes get a TypeError from merge_dicts. I guess it's related to this MR #1278 . This Type error is deterministic, meaning it always occurs at the same glo...

Hints (truncated):

> Did you passed any 'agg_key_funcs' to the logger class? If I understand the code correctly, by default np.mean is used to aggregate the dict values returned during training. Maybe numpy tries in the mean function to *add* (+ func) values which can't be summed up?
> Can you maybe post the code snippets where you return...

Traceback (truncated; the stored text interleaves a tqdm progress bar with the first line):

```text
Traceback (most recent call last):████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 8.76it/s]
  File "/home/sebastian/.pyenv/versions/3.7.7/lib/python3.7/multiprocessing/util.py", line 277, in _run_finalizer...
```
---

**`Lightning-AI__lightning-1589`** (`Lightning-AI/lightning`)

- base_commit: `79196246cfcc73391de1be71bfb27d4366daf75a`
- created_at: 2020-04-24T03:49:56Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 141

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/distrib_parts.py b/pytorch_lightning/trainer/distrib_parts.py
--- a/pytorch_lightning/trainer/distrib_parts.py
+++ b/pytorch_lightning/trainer/distrib_parts.py
@@ -461,10 +461,15 @@ def __transfer_data_to_device(self, batch, device, gpu_id=None):
# when tuple
i...
```

Problem statement (truncated):

> Named converted to regular tuples when sent to the gpu.
> <!--
> ### Common bugs:
> 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
> 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq) ...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "demo.py", line 48, in <module>
    pl.Trainer(max_epochs=20, gpus=1).fit(module)
  File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 749, in fit
    self.single_gpu_train(model)
  File "/home/n/repos/pytorch-lightning/pytorch_lightning/tra...
```
---

**`Lightning-AI__lightning-2014`** (`Lightning-AI/lightning`)

- base_commit: `8b9b923ca8ad9fdb0ae22928de0029e7c2e7a782`
- created_at: 2020-05-30T12:26:09Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 177

Patch (truncated):

```diff
diff --git a/pl_examples/domain_templates/computer_vision_fine_tuning.py b/pl_examples/domain_templates/computer_vision_fine_tuning.py
--- a/pl_examples/domain_templates/computer_vision_fine_tuning.py
+++ b/pl_examples/domain_templates/computer_vision_fine_tuning.py
@@ -450,5 +450,4 @@ def get_args() -> argparse.Namesp...
```

Problem statement (truncated):

> Bug in GAN example
> Bug in https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/domain_templates/generative_adversarial_net.py
> When I run `python generative_adversarial_net.py `
> I get
> ```
> Traceback (most recent call last):
> File "generative_adversarial_net.py", line 218, in <module>
> ...

Hints (truncated):

> Replace with `model = GAN(**vars(hparams))` [here](https://github.com/PyTorchLightning/pytorch-lightning/blob/fdbbe968256f6c68a5dbb840a2004b77a618ef61/pl_examples/domain_templates/generative_adversarial_net.py#L192). Same bug in [imagenet script](https://github.com/PyTorchLightning/pytorch-lightning/blob/fdbbe968256f6c...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "generative_adversarial_net.py", line 218, in <module>
    main(hparams)
  File "generative_adversarial_net.py", line 192, in main
    model = GAN(hparams)
  File "generative_adversarial_net.py", line 90, in __init__
    self.generator = Generator(latent_dim=self.latent_...
```
---

**`Lightning-AI__lightning-2115`** (`Lightning-AI/lightning`)

- base_commit: `0bd7780adc4d68007946cf380a6a24e1a08d99d1`
- created_at: 2020-06-08T15:37:16Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 188

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py
--- a/pytorch_lightning/trainer/data_loading.py
+++ b/pytorch_lightning/trainer/data_loading.py
@@ -139,6 +139,7 @@ def _get_distributed_sampler(self, dataloader):
else:
world_size = {
...
```

Problem statement (truncated):

> verify ddp and ddp_spawn implementation
> CUDA error: an illegal memory access was encountered after updating to the latest stable packages
> Can anyone help with this CUDA error: an illegal memory access was encountered ??
> It runs fine for several iterations...
> ## 🐛 Bug
> ```
> Traceback (most recent call last):
> ...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "train_gpu.py", line 237, in <module>
    main_local(hparam_trial)
  File "train_gpu.py", line 141, in main_local
    trainer.fit(model)
  File "/shared/storage/cs/staffstore/username/anaconda3/envs/sh1/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py",...
```
---

**`Lightning-AI__lightning-2216`** (`Lightning-AI/lightning`)

- base_commit: `e780072961562ab1d89bad871918fcc422ad0ac6`
- created_at: 2020-06-17T03:24:11Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 201

Patch (truncated):

```diff
diff --git a/pytorch_lightning/loggers/base.py b/pytorch_lightning/loggers/base.py
--- a/pytorch_lightning/loggers/base.py
+++ b/pytorch_lightning/loggers/base.py
@@ -3,13 +3,11 @@
import operator
from abc import ABC, abstractmethod
from argparse import Namespace
-from typing import Union, Optional, Dict, Iterable, ...
```

Problem statement (truncated):

> Hydra MLFlow Clash
> <!--
> ### Common bugs:
> 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
> 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
> -->
> ## 🐛 Bug
> When using the ...

Hints (truncated):

> Hi! thanks for your contribution!, great first issue!
> > Check whether the instance if `dict` or `DictConfig` in the given line.
>
> @ssakhavi that sounds reasonable solution, mind sending a PR - fix and its test?

Traceback (truncated):

```text
Traceback (most recent call last):
  File "/home/siavash/KroniKare/kwae2/kwae_ma/models/pl_train_segmentation_model.py", line 115, in <module>
    main()
  File "/home/siavash/anaconda3/envs/kwae-ma/lib/python3.7/site-packages/hydra/main.py", line 24, in decorated_main
    strict=strict,
  File "/home/siavash/ana...
```
---

**`Lightning-AI__lightning-2255`** (`Lightning-AI/lightning`)

- base_commit: `b5a2f1ec4463064394dc6d977ffd246aa11158af`
- created_at: 2020-06-19T02:43:10Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 209

Patch (truncated):

```diff
diff --git a/pl_examples/basic_examples/gpu_template.py b/pl_examples/basic_examples/gpu_template.py
--- a/pl_examples/basic_examples/gpu_template.py
+++ b/pl_examples/basic_examples/gpu_template.py
@@ -23,7 +23,7 @@ def main(hparams):
# ------------------------
# 1 INIT LIGHTNING MODEL
# ---------------...
```

Problem statement (truncated):

> CPU/GPU Template
> ## 🐛 Bug
> The GPU or CPU template do not run currently on master after changes including the setup hook.
> ```
> python -m pl_examples.basic_examples.gpu_template --gpus 4 --distributed_backend ddp
> python -m pl_examples.basic_examples.cpu_template
> ```
> CPU Template Error:
> ```
> Traceback (m...

Hints (truncated):

> try again?
> > try again?
>
> it is in master now... :(

Traceback (truncated):

```text
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/anthony/Downloads/pytorch-lightning/pl_examples/basic_examples/cpu_template.py",...
```
---

**`Lightning-AI__lightning-2293`** (`Lightning-AI/lightning`)

- base_commit: `3256fe4e5a405db1ab00d4cf4d48cbbfc7730959`
- created_at: 2020-06-19T23:57:59Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 213

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py
--- a/pytorch_lightning/trainer/data_loading.py
+++ b/pytorch_lightning/trainer/data_loading.py
@@ -52,6 +52,8 @@ def _has_len(dataloader: DataLoader) -> bool:
return True
except TypeError:
return F...
```

Problem statement (truncated):

> _has_len does not handle NotImplementedError (raised by torchtext)
> <!--
> ### Common bugs:
> 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
> 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightni...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "/Users/thomas/scm/OakDataPrep/oakSkipThoughtTrainer.py", line 18, in <module>
    trainer.fit(model)
  File "/Users/thomas/virtualenv/Python3/PyTorch/env/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 952, in fit
    self.run_pretrain_routine(mode...
```
---

**`Lightning-AI__lightning-2356`** (`Lightning-AI/lightning`)

- base_commit: `220bb6db57e7181e857a128e245ce242b6cf429f`
- created_at: 2020-06-25T02:42:06Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 219

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/optimizers.py b/pytorch_lightning/trainer/optimizers.py
--- a/pytorch_lightning/trainer/optimizers.py
+++ b/pytorch_lightning/trainer/optimizers.py
@@ -111,15 +111,25 @@ def configure_schedulers(self, schedulers: list):
def reinit_scheduler_properties(self, optimizers: list, ...
```

Problem statement (truncated):

> Trainer(precision=16) fails with optim.lr_scheduler.ReduceLROnPlateau
> <!--
> ### Common bugs:
> 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
> 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-ligh...

Hints (truncated):

> Hi! thanks for your contribution!, great first issue!
> @naokishibuya good catch. It seems like a problem that should be solved upstream in pytorch, but for now we can solve this locally. Would you be up for a PR?
> When I tried this fix, it solved the error but unfortunately `ReduceLROnPlateau` stopped working for me (i.e...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "main.py", line 65, in <module>
    main() ...
```
---

**`Lightning-AI__lightning-2358`** (`Lightning-AI/lightning`)

- base_commit: `a5f45787eabddfec4559983f8e6ba1c8317f62f1`
- created_at: 2020-06-25T07:51:08Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 220

Patch (truncated):

```diff
diff --git a/pl_examples/basic_examples/gpu_template.py b/pl_examples/basic_examples/gpu_template.py
--- a/pl_examples/basic_examples/gpu_template.py
+++ b/pl_examples/basic_examples/gpu_template.py
@@ -61,7 +61,8 @@ def main(hparams):
'--distributed_backend',
type=str,
default='dp',
- ...
```

Problem statement (truncated):

> accuracy metric dosen't support windows
> ## 🐛 Bug
> Pytorch Metric.Accuracy uses `ReduceOp` from 'torch.distribution' but torch.distributrion doesn't support `windows`
> - https://github.com/pytorch/pytorch/blob/cf8a9b50cacb1702f5855859c657a5358976437b/torch/distributed/__init__.py#L10 : `torch.distributed is availabl...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "test.py", line 11, in <module>
    from pytorch_lightning.metrics.functional import accuracy
  File "C:\Users\dcho\Anaconda3\envs\torch_py36\lib\site-packages\pytorch_lightning\metrics\__init__.py", line 1, in <module>
    from pytorch_lightning.metrics.converters import ...
```
---

**`Lightning-AI__lightning-2360`** (`Lightning-AI/lightning`)

- base_commit: `f2710bb500be017d48ccc6cf596bbed6cc9bdad5`
- created_at: 2020-06-25T14:11:42Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 221

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/trainer.py b/pytorch_lightning/trainer/trainer.py
--- a/pytorch_lightning/trainer/trainer.py
+++ b/pytorch_lightning/trainer/trainer.py
@@ -1193,7 +1193,8 @@ def test(
self.teardown('test')
if self.is_function_implemented('teardown'):
- self.model.te...
```

Problem statement (truncated):

> AttributeError: 'LightningDataParallel' object has no attribute 'teardown'
> <!--
> ### Common bugs:
> 1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
> 2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch...

Hints (truncated):

> Hi! thanks for your contribution!, great first issue!
> +1 on this issue.
> Also confirm this issue.

Traceback (truncated):

```text
Traceback (most recent call last):
  File "run_kitti.py", line 351, in <module>
    trainer.test(model)
  File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1198, in test
    self.model.teardown('test')
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py...
```
---

**`Lightning-AI__lightning-2428`** (`Lightning-AI/lightning`)

- base_commit: `a75398530c3447ecf13f043a1bc817929b90fd65`
- created_at: 2020-06-30T13:23:18Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 230

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/training_loop.py b/pytorch_lightning/trainer/training_loop.py
--- a/pytorch_lightning/trainer/training_loop.py
+++ b/pytorch_lightning/trainer/training_loop.py
@@ -776,6 +776,7 @@ def optimizer_closure(self, split_batch, batch_idx, opt_idx, optimizer, hiddens)
# PROCE...
```

Problem statement (truncated):

> training_epoch_end's outputs doesn't have 'loss' key
> pytorch-lightning: build from master
> ```
> Traceback (most recent call last):
> File "main.py", line 140, in <module>
> main(hparams)
> File "main.py", line 72, in main
> trainer.fit(model)
> File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/py...

Hints (truncated):

> Try: `avg_loss = torch.stack([x['batch_loss'] for x in outputs]).mean()`
> Thanks, it works
> but 'train_acc' key doesn't exist, neither do `batch_train_acc`. How to access other keys returned in training_step?
> As of now in lightning you can access them using `x['callback_metrics']['loss']` and `x['callback_metrics']['tra...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "main.py", line 140, in <module>
    main(hparams)
  File "main.py", line 72, in main
    trainer.fit(model)
  File "/mnt/lustre/maxiao1/anaconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 881, in fit
    self.ddp_train(task, model)
  File ...
```
---

**`Lightning-AI__lightning-2433`** (`Lightning-AI/lightning`)

- base_commit: `d4a02e3bd8471946c606fef7512ce44d42f07d3a`
- created_at: 2020-06-30T18:33:09Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 231

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/training_loop.py b/pytorch_lightning/trainer/training_loop.py
--- a/pytorch_lightning/trainer/training_loop.py
+++ b/pytorch_lightning/trainer/training_loop.py
@@ -802,9 -802,22 @@ def optimizer_closure(self, split_batch, batch_idx, opt_idx, optimizer, hiddens)
if sel...
```

Problem statement (truncated):

> 0.8.2 calls backward on '_GeneratorContextManager'
> ## 🐛 Bug
> 0.8.2 calls backward on '_GeneratorContextManager' and crashes training.
> 0.8.1 works correctly. my `training_step` returns `{'loss':loss, 'log':{'learn_rate':self.lr}}`
> ```
> Traceback (most recent call last):
> File "/opt/conda/lib/python3.6/site-pack...

Hints (truncated):

> did you override optimizer step?
> could you try master? we just pushed a fix to a typo we had
> Can confirm this happens on 0.8.3
> ok. Can you post a colab example that replicates this?
> @Anjum48 @s-rog
> colab please
> @williamFalcon my optimizer step was untouched, I can't run more testing atm but I'll get to it as soon as...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
    fn(i, *args)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 538, in ddp_train
    self.run_pretrain_routine(model)
  Fi...
```
---

**`Lightning-AI__lightning-2565`** (`Lightning-AI/lightning`)

- base_commit: `e1bc208f66891e22f0139619a1be5c06235a0f34`
- created_at: 2020-07-09T10:46:34Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 250

Patch (truncated):

```diff
diff --git a/pytorch_lightning/trainer/distrib_data_parallel.py b/pytorch_lightning/trainer/distrib_data_parallel.py
--- a/pytorch_lightning/trainer/distrib_data_parallel.py
+++ b/pytorch_lightning/trainer/distrib_data_parallel.py
@@ -189,6 +189,7 @@ class TrainerDDPMixin(ABC):
num_nodes: int
node_rank: int
...
```

Problem statement (truncated):

> Can't use None (anymore) in checkpoint_callback
> ## 🐛 Bug
> using None in checkpoint_callback now errors out
> ```
> -- Process 0 terminated with the following error:
> Traceback (most recent call last):
> File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
> fn(i, *args)
> ...

Traceback (truncated):

```text
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
    fn(i, *args)
  File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 562, in ddp_train
    q.put(self.checkpoint_callback.best_mo...
```
---

**`Lightning-AI__lightning-2572`** (`Lightning-AI/lightning`)

- base_commit: `c197b74289997fa11cd372b51adb637f3e3846ec`
- created_at: 2020-07-10T01:17:22Z, FAIL_TO_PASS: `[]`, PASS_TO_PASS: `[]`, `__index_level_0__`: 252

Patch (truncated):

```diff
diff --git a/pytorch_lightning/core/memory.py b/pytorch_lightning/core/memory.py
--- a/pytorch_lightning/core/memory.py
+++ b/pytorch_lightning/core/memory.py
@@ -209,7 +209,7 @@ def _forward_example_input(self) -> None:
input_ = model.example_input_array
input_ = model.transfer_batch_to_device(input_...
```

Problem statement (truncated):

> TPU fp16 requires apex installed
> <!--
> ## 🐛 Bug
> <!-- A clear and concise description of what the bug is. -->
> When I tried to use precision=16 on TPU, pytorch-lightning is trying to find amp, which is unnecessary.
> The backtrace is
> ```
> GPU available: False, used: False
> TPU available: True, using: 8 TPU cores...

Hints (truncated):

> Hi! thanks for your contribution!, great first issue!
> If you want to do 16 bit precision training, you either need to have the nightly version of pytorch install or have apex installed. Based on the traceback I guess that you do not have any of them.
> I could get this working using nightly version of pytorch:
> ```
> pl....

Traceback (truncated):

```text
Traceback (most recent call last):
  File "bert_ner/light/fp16_debug.py", line 16, in <module>
    trainer = pl.Trainer(tpu_cores=8, precision=16)
  File "/anaconda3/envs/torch-xla-1.5/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 607, in __init__
    self.init_amp()
  File "/anaconda3/e...
```