
CUDA Error in batchNorm #42588

@ghk829

Description


🐛 Bug

```
RuntimeError                              Traceback (most recent call last)
in
----> 1 loss.backward()

/opt/conda/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
    183                 products. Defaults to False.
    184         """
--> 185         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    186
    187     def register_hook(self, hook):

/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
    125     Variable._execution_engine.run_backward(
    126         tensors, grad_tensors, retain_graph, create_graph,
--> 127         allow_unreachable=True)  # allow_unreachable flag
    128
    129

RuntimeError: Expected grad_output->is_contiguous(grad_output->suggest_memory_format()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
Exception raised from cudnn_batch_norm_backward at /opt/conda/conda-bld/pytorch_1595629403081/work/aten/src/ATen/native/cudnn/BatchNorm.cpp:249 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f993197377d in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: at::native::cudnn_batch_norm_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, at::Tensor const&) + 0x25b2 (0x7f9932a79db2 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)
frame #2: + 0xd1150a (0x7f9932aea50a in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)
frame #3: + 0xd3fa3b (0x7f9932b18a3b in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)
frame #4: at::cudnn_batch_norm_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, at::Tensor const&) + 0x1ef (0x7f9964cff10f in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #5: + 0x2b59cff (0x7f9966946cff in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #6: + 0x2b6b21b (0x7f996695821b in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #7: at::cudnn_batch_norm_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, double, at::Tensor const&) + 0x1ef (0x7f9964cff10f in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #8: torch::autograd::generated::CudnnBatchNormBackward::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x42c (0x7f99668a9fec in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #9: + 0x30d1017 (0x7f9966ebe017 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #10: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1400 (0x7f9966eb9860 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #11: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f9966eba401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #12: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7f9966eb2579 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #13: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7f996b1e199a in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #14: + 0xc819d (0x7f99aa6fa19d in /opt/conda/bin/../lib/libstdc++.so.6)
frame #15: + 0x76db (0x7f99adb356db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #16: clone + 0x3f (0x7f99ad85e88f in /lib/x86_64-linux-gnu/libc.so.6)
```
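The failing check compares the gradient's memory layout against the format cuDNN expects: `grad_output` must be contiguous in its suggested memory format when it reaches `cudnn_batch_norm_backward`. In Python terms, view-producing operations such as `permute` or `transpose` return tensors whose strides no longer match a dense layout. A minimal illustration (shapes are made up for the example):

```python
import torch

# A freshly allocated 4-D tensor is contiguous in the default NCHW layout.
x = torch.randn(2, 3, 4, 4)
assert x.is_contiguous()

# Permuting dimensions returns a view over the same storage with reordered
# strides, so the result is no longer contiguous.
y = x.permute(0, 2, 3, 1)
print(y.is_contiguous())  # False

# .contiguous() materializes a fresh, densely laid-out copy.
z = y.contiguous()
print(z.is_contiguous())  # True
```

When a tensor like `y` (or a gradient derived from it) flows into the cuDNN batch-norm backward kernel, the contiguity assertion above is what fails.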

To Reproduce

Steps to reproduce the behavior:

  1. I used a custom lambda layer before the BatchNorm layer
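The issue does not include the lambda layer's code, so the following is only a hypothetical sketch of how a custom layer can hand a non-contiguous tensor to `BatchNorm2d`, together with the `.contiguous()` workaround. The `Lambda` wrapper and the `transpose` inside it are assumptions for illustration; on CPU this runs through, while on CUDA with cuDNN enabled a layout like this is the kind of input that surfaced the error above.

```python
import torch
import torch.nn as nn

class Lambda(nn.Module):
    """Hypothetical wrapper turning an arbitrary function into a layer."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x)

# Swapping the (square) spatial dims keeps the shape but reorders the
# strides, so the lambda layer's output is non-contiguous.
model = nn.Sequential(
    Lambda(lambda t: t.transpose(2, 3)),
    nn.BatchNorm2d(3),
)

x = torch.randn(2, 3, 8, 8, requires_grad=True)
out = model(x)
out.sum().backward()  # on CUDA + cuDNN, this backward is where the error appeared

# Workaround: force a dense layout before BatchNorm.
fixed = nn.Sequential(
    Lambda(lambda t: t.transpose(2, 3).contiguous()),
    nn.BatchNorm2d(3),
)
```

Calling `.contiguous()` copies the data into a dense layout, so the gradient reaching the cuDNN batch-norm backward kernel satisfies the contiguity check.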

Expected behavior

Environment

Ubuntu, CUDA 10.1, Python 3.7
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).

You can get the script and run it with:

```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
  • PyTorch Version (e.g., 1.0):
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, source):
  • Build command you used (if compiling from source):
  • Python version:
  • CUDA/cuDNN version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context

cc @ngimel @csarofeen @ptrblck @xwang233

Metadata

Assignees

No one assigned

    Labels

    module: cuda (Related to torch.cuda, and CUDA support in general), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
