🐛 Bug
For more detail, check the comments in pytorch/pytorch#88449.
I was using the following command on shunting's branch, with some debugging flags enabled:

```
TF_CPP_MIN_LOG_LEVEL=0 TF_CPP_VMODULE="pjrt_computation_client=5" python benchmarks/dynamo/torchbench.py --performance --training --use-xla-baseline --randomize-input --backend=torchxla_trace_once --only mobilenet_v2 2>&1 | tee /tmp/dynamo.txt
```

and I am able to see:

```
device data with handle 93913242061056 does not have info
```
If I search for this handle in the log, it appears at the very beginning:
```
2022-11-08 01:31:33.305976: I 86150 tensorflow/compiler/xla/service/service.cc:173] XLA service 0x556968434f00 initialized for platform TPU (this does not guarantee that XLA will be used). Devices:
2022-11-08 01:31:33.306071: I 86150 tensorflow/compiler/xla/service/service.cc:181] StreamExecutor device (0): TPU, 2a886c8
2022-11-08 01:31:33.306078: I 86150 tensorflow/compiler/xla/service/service.cc:181] StreamExecutor device (1): TPU, 2a886c8
2022-11-08 01:31:33.306083: I 86150 tensorflow/compiler/xla/service/service.cc:181] StreamExecutor device (2): TPU, 2a886c8
2022-11-08 01:31:33.306090: I 86150 tensorflow/compiler/xla/service/service.cc:181] StreamExecutor device (3): TPU, 2a886c8
2022-11-08 01:31:33.331434: I 86150 ./tensorflow/compiler/xla/xla_client/pjrt_computation_client.h:133] create data with handle 93913242061056
```
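For reference, locating the handle is just a plain-text search over the captured log (the `/tmp/dynamo.txt` path comes from the `tee` in the command above). A minimal sketch, using a sample log file rather than the real trace:

```shell
# Hypothetical illustration: write one sample log line (wording taken from the
# report above), then grep for the handle from the error message.
printf 'create data with handle 93913242061056\n' > /tmp/handle_log.txt
grep -n 'handle 93913242061056' /tmp/handle_log.txt
```

Against the real run, pointing `grep` at `/tmp/dynamo.txt` shows that the only `create data` line with this handle is the one logged right after device initialization.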
This makes me believe it is the random seed IR.
FYI @shunting314 @wconstab