
Commit 0ec2708

wconstab authored and facebook-github-bot committed
Add torch::deploy, an embedded torch-python interpreter (#50458)
Summary: Pull Request resolved: #50458

libinterpreter.so contains a frozen python distribution including torch-python bindings. Freezing refers to serializing the bytecode of python standard library modules as well as the torch python library and embedding them in the library code. This library can then be dlopened multiple times in one process context, each interpreter having its own python state and GIL. In addition, each python environment is sealed off from the filesystem and can only import the frozen modules included in the distribution.

This change relies on the newly added frozenpython, a cpython 3.8.6 fork built for this purpose. Frozenpython provides libpython3.8-frozen.a, which contains frozen bytecode and object code for the python standard library. Building on top of frozen python, this diff adds the frozen torch-python bindings, providing each embedded interpreter with its own copy of the torch bindings. Each interpreter is intended to share one instance of libtorch and the underlying tensor libraries.

Known issues:
- Autograd is not expected to work with the embedded interpreter currently, as it manages its own python interactions and needs to coordinate with the duplicated python states in each of the interpreters.
- Distributed and cuda functionality is disabled in the libinterpreter.so build; this needs to be revisited.
- __file__ is not supported in the context of embedded python, since there are no files for the underlying library modules.
- __version__ is not properly supported in the embedded torch-python; there is just a workaround for now.

Test Plan: tested locally and on CI with cmake and buck builds running the torch::deploy interpreter_test

Reviewed By: ailzhang

Differential Revision: D25850783

fbshipit-source-id: 174768eb20113183840b7b784f5ea70700efbe32
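The freezing described above boils down to compiling module source to bytecode and embedding the marshalled bytes in the binary, where they are unmarshalled at import time. A minimal illustration of that round trip (not the actual freeze tooling from this PR):

```python
import marshal

# Source for a tiny module, standing in for a stdlib or torch module.
source = "def greet():\n    return 'hello from a frozen module'\n"

# Step 1: compile to a code object, as CPython's freeze tooling does.
code = compile(source, "<frozen demo>", "exec")

# Step 2: marshal the bytecode; the real freeze step emits these bytes
# as a C byte array that gets linked into libinterpreter.so.
frozen_bytes = marshal.dumps(code)

# Step 3: at import time, the embedded interpreter unmarshals and
# executes the bytes to populate the module's namespace -- no
# filesystem access required.
namespace = {}
exec(marshal.loads(frozen_bytes), namespace)
print(namespace["greet"]())
```

Because imports resolve against these embedded byte arrays rather than the filesystem, each interpreter is sealed off from everything except the frozen distribution.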
1 parent 47f0bda commit 0ec2708

27 files changed

Lines changed: 1033 additions & 13 deletions

.github/workflows/lint.yml

Lines changed: 6 additions & 0 deletions
@@ -170,6 +170,8 @@ jobs:
       # FunctionsManual.cpp is excluded to keep this diff clean. It will be fixed
       # in a follow up PR.
       # /torch/csrc/generic/*.cpp is excluded because those files aren't actually built.
+      # deploy/interpreter files are excluded due to using macros and other techniques
+      # that are not easily converted to accepted c++
       python tools/clang_tidy.py \
         --verbose \
         --paths torch/csrc/ \
@@ -186,6 +188,10 @@ jobs:
         -g"-torch/csrc/autograd/FunctionsManual.cpp" \
         -g"-torch/csrc/generic/*.cpp" \
         -g"-torch/csrc/jit/codegen/cuda/runtime/*" \
+        -g"-torch/csrc/deploy/interpreter/interpreter.cpp" \
+        -g"-torch/csrc/deploy/interpreter/interpreter.h" \
+        -g"-torch/csrc/deploy/interpreter/interpreter_impl.h" \
+        -g"-torch/csrc/deploy/interpreter/test_main.cpp" \
         "$@" > ${GITHUB_WORKSPACE}/clang-tidy-output.txt

       cat ${GITHUB_WORKSPACE}/clang-tidy-output.txt

.gitignore

Lines changed: 3 additions & 0 deletions
@@ -66,6 +66,9 @@ torch/csrc/autograd/generated/*
 torch/testing/_internal/generated/annotated_fn_args.py
 torch/testing/_internal/data/*.pt
 torch/csrc/cudnn/cuDNN.cpp
+torch/csrc/deploy/interpreter/cpython
+torch/csrc/deploy/interpreter/frozen
+torch/csrc/deploy/interpreter/third_party/typing_extensions.py
 torch/csrc/generated
 torch/csrc/generic/TensorMethods.cpp
 torch/csrc/jit/generated/*

.jenkins/pytorch/build.sh

Lines changed: 11 additions & 0 deletions
@@ -23,6 +23,17 @@ if [[ "$BUILD_ENVIRONMENT" == *-mobile-code-analysis* ]]; then
   exec "$(dirname "${BASH_SOURCE[0]}")/build-mobile-code-analysis.sh" "$@"
 fi

+if [[ "$BUILD_ENVIRONMENT" == *linux-xenial-cuda10.2-cudnn7-py3-gcc7* ]]; then
+  # Enabling DEPLOY build (embedded torch python interpreter, experimental)
+  # only on one config for now, can expand later
+  export USE_DEPLOY=ON
+
+  # Deploy feature builds cpython. It requires these packages.
+  # TODO move this to dockerfile?
+  sudo apt-get -qq update
+  sudo apt-get -qq install libffi-dev libbz2-dev libreadline-dev libncurses5-dev libncursesw5-dev libgdbm-dev libsqlite3-dev uuid-dev tk-dev
+fi
+
 echo "Python version:"
 python --version

.jenkins/pytorch/test.sh

Lines changed: 8 additions & 0 deletions
@@ -354,6 +354,11 @@ test_vec256() {
   fi
 }

+test_torch_deploy() {
+  SIMPLE_MODEL_PATH=torch/csrc/deploy/example/simple.pt LIBINTERPRETER_PATH=build/lib/libinterpreter.so build/bin/interpreter_test
+  assert_git_not_dirty
+}
+
 if ! [[ "${BUILD_ENVIRONMENT}" == *libtorch* || "${BUILD_ENVIRONMENT}" == *-bazel-* ]]; then
   (cd test && python -c "import torch; print(torch.__config__.show())")
   (cd test && python -c "import torch; print(torch.__config__.parallel_info())")
@@ -371,6 +376,9 @@ elif [[ "${BUILD_ENVIRONMENT}" == *libtorch* ]]; then
   # TODO: run some C++ tests
   echo "no-op at the moment"
 elif [[ "${BUILD_ENVIRONMENT}" == *-test1 || "${JOB_BASE_NAME}" == *-test1 ]]; then
+  if [[ "${BUILD_ENVIRONMENT}" == pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7-test1 ]]; then
+    test_torch_deploy
+  fi
   install_torchvision
   test_python_shard1
 elif [[ "${BUILD_ENVIRONMENT}" == *-test2 || "${JOB_BASE_NAME}" == *-test2 ]]; then

CMakeLists.txt

Lines changed: 5 additions & 0 deletions
@@ -919,3 +919,8 @@ endif()

 include(cmake/Summary.cmake)
 caffe2_print_configuration_summary()
+
+# ---[ Torch Deploy
+if(USE_DEPLOY)
+  add_subdirectory(torch/csrc/deploy)
+endif()

torch/__init__.py

Lines changed: 7 additions & 3 deletions
@@ -21,7 +21,11 @@
 from ._utils import _import_dotted_name
 from ._utils_internal import get_file_path, prepare_multiprocessing_environment, \
     USE_RTLD_GLOBAL_WITH_LIBTORCH, USE_GLOBAL_DEPS
-from .version import __version__
+# TODO(torch_deploy) figure out how to freeze version.py in fbcode build
+if sys.executable == 'torch_deploy':
+    __version__ = "torch-deploy-1.8"
+else:
+    from .version import __version__
 from ._six import string_classes as _string_classes

 from typing import Set, Type, TYPE_CHECKING
@@ -132,7 +136,7 @@

 # See Note [Global dependencies]
 def _load_global_deps():
-    if platform.system() == 'Windows':
+    if platform.system() == 'Windows' or sys.executable == 'torch_deploy':
         return

     lib_name = 'libtorch_global_deps' + ('.dylib' if platform.system() == 'Darwin' else '.so')
@@ -494,7 +498,7 @@ class QUInt4x2Storage(_C.QUInt4x2StorageBase, _StorageBase):
 ################################################################################

 def manager_path():
-    if platform.system() == 'Windows':
+    if platform.system() == 'Windows' or sys.executable == 'torch_deploy':
         return b""
     path = get_file_path('torch', 'bin', 'torch_shm_manager')
     prepare_multiprocessing_environment(get_file_path('torch'))
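The diff above gates filesystem-dependent setup on a sentinel executable name reported by the embedded interpreters. A small sketch of that detection pattern (the helper name is mine, not from the PR):

```python
import sys

def running_under_torch_deploy():
    # torch::deploy's embedded interpreters report 'torch_deploy' as
    # sys.executable; the patched torch/__init__.py uses this check to
    # skip loading libtorch_global_deps and the shm manager, both of
    # which assume a real on-disk install layout.
    return sys.executable == 'torch_deploy'

# In an ordinary CPython process this is False, so the normal
# filesystem-based code paths run unchanged.
print(running_under_torch_deploy())
```

Using sys.executable as the signal keeps the check cheap and avoids introducing a new global flag into torch's import-time code.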

torch/_ops.py

Lines changed: 1 addition & 2 deletions
@@ -2,7 +2,6 @@

 import contextlib
 import ctypes
-import os
 import sys
 import types

@@ -67,7 +66,7 @@ def __getattr__(self, op_name):
         return op

 class _Ops(types.ModuleType):
-    __file__ = os.path.join(os.path.dirname(__file__), '_ops.py')
+    __file__ = '_ops.py'

     def __init__(self):
         super(_Ops, self).__init__('torch.ops')
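The change above drops the path math because frozen modules have no real files behind them. Builtin modules statically linked into CPython already exhibit this: they carry no __file__ at all, which is why expressions like os.path.dirname(__file__) break once torch is frozen into the binary. A quick demonstration:

```python
import sys
import types

# A stand-in for torch.ops under deploy: __file__ is set to a bare
# name rather than a computed filesystem path, mirroring the
# torch/_ops.py change above.
ops_module = types.ModuleType('torch.ops')
ops_module.__file__ = '_ops.py'

# sys is compiled into the interpreter, so it has no __file__;
# frozen modules behave the same way.
print(hasattr(sys, '__file__'))
```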

torch/_utils_internal.py

Lines changed: 11 additions & 5 deletions
@@ -1,18 +1,24 @@

 import os
 import inspect
+import sys
 import tempfile

 # this arbitrary-looking assortment of functionality is provided here
 # to have a central place for overrideable behavior. The motivating
 # use is the FB build environment, where this source file is replaced
 # by an equivalent.

-if os.path.basename(os.path.dirname(__file__)) == 'shared':
-    torch_parent = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
+if sys.executable == 'torch_deploy':
+    # __file__ is meaningless in the context of frozen torch used in torch deploy.
+    # setting empty torch_parent should allow below functions to operate without crashing,
+    # but it's unclear if there is a valid use case for them in the context of deploy.
+    torch_parent = ""
 else:
-    torch_parent = os.path.dirname(os.path.dirname(__file__))
-
+    if os.path.basename(os.path.dirname(__file__)) == 'shared':
+        torch_parent = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
+    else:
+        torch_parent = os.path.dirname(os.path.dirname(__file__))

 def get_file_path(*path_components):
     return os.path.join(torch_parent, *path_components)
@@ -60,7 +66,7 @@ def get_source_lines_and_file(obj, error_msg=None):

 TEST_MASTER_ADDR = '127.0.0.1'
 TEST_MASTER_PORT = 29500
-# USE_GLOBAL_DEPS controls whether __init__.py tries to load
+# USE_GLOBAL_DEPS controls whether __init__.py tries to load
 # libtorch_global_deps, see Note [Global dependencies]
 USE_GLOBAL_DEPS = True
 # USE_RTLD_GLOBAL_WITH_LIBTORCH controls whether __init__.py tries to load
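Setting torch_parent to "" works because os.path.join treats an empty first component as a no-op, so path helpers degrade to relative paths instead of raising. A sketch of that behavior (torch_parent is made an explicit argument here for illustration; in the real module it is a global):

```python
import os

def get_file_path(torch_parent, *path_components):
    # Same shape as the helper in torch/_utils_internal.py: with
    # torch_parent == "" (the deploy case), the join yields a bare
    # relative path rather than crashing, which is the
    # "operate without crashing" behavior the diff's comment describes.
    return os.path.join(torch_parent, *path_components)

# Normal install: an absolute path rooted at the torch checkout.
# Deploy: a relative path that callers are not really expected to use.
print(get_file_path("", "torch", "bin", "torch_shm_manager"))
```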

torch/csrc/Module.cpp

Lines changed: 2 additions & 0 deletions
@@ -692,6 +692,8 @@ extern "C"
 #ifdef _WIN32
 __declspec(dllexport)
 #endif
+TORCH_API PyObject* initModule();
+// separate decl and defn for msvc error C2491
 PyObject* initModule() {
   HANDLE_TH_ERRORS
   at::internal::lazy_init_num_threads();

torch/csrc/deploy/.gitignore

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+example/generated/*
