GPU test failure: test_dynamo.py #8952

@tengyifei

Description

This GPU CI failure is confusing to contributors: the three `initialize_on_cuda=True` tests below fail with `AssertionError: Inputs must be XLA tensors. Got cuda:0` on unrelated PRs, so contributors assume their own change broke the build.

FAIL: test_einsum1 (initialize_on_cuda=True, backend=<function dynamo_backend at 0x7fb7a1018c10>) (__main__.DynamoInferenceBasicTest)
DynamoInferenceBasicTest.test_einsum1 (initialize_on_cuda=True, backend=<function dynamo_backend at 0x7fb7a1018c10>)
test_einsum(initialize_on_cuda=True, backend=<function dynamo_backend at 0x7fb7a1018c10>)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/absl/testing/parameterized.py", line 321, in bound_param_test
    return test_method(self, **testcase_params)
  File "/__w/xla/xla/pytorch/xla/test/dynamo/test_dynamo.py", line 327, in test_einsum
    res_device_dynamo = dynamo_einsum_mm(a, b)
  File "/usr/local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
    return fn(*args, **kwargs)
  File "/__w/xla/xla/pytorch/xla/test/dynamo/test_dynamo.py", line 318, in einsum_mm
    def einsum_mm(a, b):
  File "/usr/local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 850, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 933, in returned_function
    compiled_fn, _ = create_aot_dispatcher_function(
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 574, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 824, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 219, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
  File "/usr/local/lib/python3.10/site-packages/torch_xla/_dynamo/dynamo_backend2.py", line 47, in _dynamo_backend
    computation = xb.jax_func_to_xla_computation(
  File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_builder.py", line 882, in jax_func_to_xla_computation
    sample_inputs = tuple(abstractify(a) for a in flattened_inputs)
  File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_builder.py", line 882, in <genexpr>
    sample_inputs = tuple(abstractify(a) for a in flattened_inputs)
  File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_builder.py", line 877, in abstractify
    assert a.device == torch_xla.device(
AssertionError: Inputs must be XLA tensors. Got cuda:0

======================================================================
FAIL: test_resnet181 (initialize_on_cuda=True, backend=<function dynamo_backend at 0x7fb7a1018c10>) (__main__.DynamoInferenceBasicTest)
DynamoInferenceBasicTest.test_resnet181 (initialize_on_cuda=True, backend=<function dynamo_backend at 0x7fb7a1018c10>)
test_resnet18(initialize_on_cuda=True, backend=<function dynamo_backend at 0x7fb7a1018c10>)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/absl/testing/parameterized.py", line 321, in bound_param_test
    return test_method(self, **testcase_params)
  File "/__w/xla/xla/pytorch/xla/test/dynamo/test_dynamo.py", line 392, in test_resnet18
    output = dynamo_resnet18(data)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torchvision/models/resnet.py", line 284, in forward
    def forward(self, x: Tensor) -> Tensor:
  File "/usr/local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 850, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 933, in returned_function
    compiled_fn, _ = create_aot_dispatcher_function(
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 574, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 824, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 1107, in aot_dispatch_autograd
    compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
  File "/usr/local/lib/python3.10/site-packages/torch_xla/_dynamo/dynamo_backend2.py", line 47, in _dynamo_backend
    computation = xb.jax_func_to_xla_computation(
  File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_builder.py", line 882, in jax_func_to_xla_computation
    sample_inputs = tuple(abstractify(a) for a in flattened_inputs)
  File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_builder.py", line 882, in <genexpr>
    sample_inputs = tuple(abstractify(a) for a in flattened_inputs)
  File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_builder.py", line 877, in abstractify
    assert a.device == torch_xla.device(
AssertionError: Inputs must be XLA tensors. Got cuda:0

======================================================================
FAIL: test_simple_model_with_in_place_ops1 (initialize_on_cuda=True, backend=<function dynamo_backend at 0x7fb7a1018c10>) (__main__.DynamoInferenceBasicTest)
DynamoInferenceBasicTest.test_simple_model_with_in_place_ops1 (initialize_on_cuda=True, backend=<function dynamo_backend at 0x7fb7a1018c10>)
test_simple_model_with_in_place_ops(initialize_on_cuda=True, backend=<function dynamo_backend at 0x7fb7a1018c10>)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/absl/testing/parameterized.py", line 321, in bound_param_test
    return test_method(self, **testcase_params)
  File "/__w/xla/xla/pytorch/xla/test/dynamo/test_dynamo.py", line 305, in test_simple_model_with_in_place_ops
    res_device_dynamo = compiled_model.forward(
  File "/usr/local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 658, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/__w/xla/xla/pytorch/xla/test/dynamo/test_dynamo.py", line 280, in forward
    def forward(self, index, copy_tensor, input_tensor, op_name):
  File "/usr/local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 850, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 933, in returned_function
    compiled_fn, _ = create_aot_dispatcher_function(
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 574, in create_aot_dispatcher_function
    return _create_aot_dispatcher_function(
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 824, in _create_aot_dispatcher_function
    compiled_fn, fw_metadata = compiler_fn(
  File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 219, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
  File "/usr/local/lib/python3.10/site-packages/torch_xla/_dynamo/dynamo_backend2.py", line 47, in _dynamo_backend
    computation = xb.jax_func_to_xla_computation(
  File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_builder.py", line 882, in jax_func_to_xla_computation
    sample_inputs = tuple(abstractify(a) for a in flattened_inputs)
  File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_builder.py", line 882, in <genexpr>
    sample_inputs = tuple(abstractify(a) for a in flattened_inputs)
  File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_builder.py", line 877, in abstractify
    assert a.device == torch_xla.device(
AssertionError: Inputs must be XLA tensors. Got cuda:0

----------------------------------------------------------------------
Ran 40 tests in 163.975s

FAILED (failures=3)
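For context, the assertion comes from `abstractify` in `torch_xla/core/xla_builder.py`, which rejects any input that is not already on the XLA device before lowering to an XLA computation. The following is a minimal pure-Python sketch of that device check (no torch_xla required; `FakeTensor`, `XLA_DEVICE`, and this `abstractify` are illustrative stand-ins, not the real implementation):

```python
from dataclasses import dataclass


@dataclass
class FakeTensor:
    """Stand-in for a torch.Tensor, carrying only a device string."""
    device: str


# Stand-in for what torch_xla.device() would report.
XLA_DEVICE = "xla:0"


def abstractify(a: FakeTensor) -> str:
    # Mirrors the shape of the check in torch_xla/core/xla_builder.py:
    # every input handed to the dynamo backend must already live on the
    # XLA device, otherwise lowering to an XLA computation cannot proceed.
    assert a.device == XLA_DEVICE, f"Inputs must be XLA tensors. Got {a.device}"
    return a.device


# A tensor created on cuda:0 (the initialize_on_cuda=True case) trips it:
try:
    abstractify(FakeTensor(device="cuda:0"))
except AssertionError as e:
    print(e)  # Inputs must be XLA tensors. Got cuda:0

# A tensor already on the XLA device passes:
print(abstractify(FakeTensor(device="xla:0")))  # xla:0
```

The sketch only shows why `cuda:0` inputs fail the check; the fix on the test side would presumably be to move inputs to `torch_xla.device()` before the dynamo-compiled function runs, or to skip these parameterizations for this backend.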
