Cpuinfo crashes on AWS lambda aarch64 #113568

@michal-sankot

Description

PyTorch crashes when run on AWS Lambda with the aarch64 (ARM) architecture.

This was patched by #16107 in PyTorch 1.x, but the fix no longer works in 2.x:

Error in cpuinfo: failed to parse the list of possible processors in /sys/devices/system/cpu/possible
Error in cpuinfo: failed to parse the list of present processors in /sys/devices/system/cpu/present
Error in cpuinfo: failed to parse both lists of possible and present processors

The issue is also reported on the cpuinfo repo (pytorch/cpuinfo#143), but there is no solution yet.
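The errors above occur because AWS Lambda's aarch64 runtime does not expose the sysfs CPU topology files that cpuinfo parses. A minimal sketch (the file paths are taken from the error messages; the helper function name is hypothetical) to confirm whether those files are readable in a given environment:

```python
from pathlib import Path

# Paths cpuinfo tries to parse, per the error messages in this issue.
SYSFS_CPU_FILES = [
    "/sys/devices/system/cpu/possible",
    "/sys/devices/system/cpu/present",
]

def check_sysfs_cpu_files(paths=SYSFS_CPU_FILES):
    """Return a dict mapping each path to its contents, or None if unreadable."""
    results = {}
    for p in paths:
        try:
            results[p] = Path(p).read_text().strip()
        except OSError:
            # Missing or unreadable -- the situation on AWS Lambda aarch64
            # that triggers the cpuinfo parse failure.
            results[p] = None
    return results

if __name__ == "__main__":
    for path, value in check_sysfs_cpu_files().items():
        print(f"{path}: {value if value is not None else 'UNREADABLE'}")
```

Running this inside the Lambda environment should show both files as unreadable, matching the lscpu failure in the CPU section below.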

Versions

PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Amazon Linux 2 (aarch64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.26

Python version: 3.9.18 (main, Oct 20 2023, 09:55:31) [GCC 7.3.1 20180712 (Red Hat 7.3.1-17)] (64-bit runtime)
Python platform: Linux-4.14.255-327-266.539.amzn2.aarch64-aarch64-with-glibc2.26
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
lscpu: failed to determine number of CPUs: /sys/devices/system/cpu/possible: No such file or directory

Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect

cc @seemethere @malfet @osalpekar @atalman @snadampal

Metadata

Labels

module: arm (Related to ARM architecture builds of PyTorch. Includes Apple M1.)
module: binaries (Anything related to official binaries that we release to users.)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module.)
