Commit 296fc8f

Update on "Rename positional and kwarg_only to have flat prefix"
I want the names positional and kwarg_only to give the unflat representation (e.g., preserving TensorOptionsArguments in the returned Union), so I regret my original naming choice from when I moved grouping into the model. This commit renames them to have a flat_ prefix and also adds flat_non_out for cases where you just want to look at the non-out arguments.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

[ghstack-poisoned]
2 parents 5df758a + fa76d69 commit 296fc8f
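To make the rename concrete, here is a minimal, hypothetical sketch of the distinction the commit message describes. The class shapes below are invented for illustration (the real model lives in tools/codegen/model.py and differs in detail): the unprefixed view keeps grouped TensorOptionsArguments intact, while the flat_ accessors expand groups into plain Arguments, and flat_non_out additionally drops out= arguments.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Argument:
    name: str
    is_out: bool = False

@dataclass
class TensorOptionsArguments:
    # A bundle (e.g. dtype/device) kept as one unit in the unflat view.
    arguments: List[Argument]

@dataclass
class Arguments:
    # Unflat representation: groups are preserved in the Union.
    positional: List[Union[Argument, TensorOptionsArguments]]

    @property
    def flat_positional(self) -> List[Argument]:
        # flat_ accessors expand grouped bundles into plain Arguments.
        flat: List[Argument] = []
        for a in self.positional:
            if isinstance(a, TensorOptionsArguments):
                flat.extend(a.arguments)
            else:
                flat.append(a)
        return flat

    @property
    def flat_non_out(self) -> List[Argument]:
        # Convenience view: flat arguments, excluding out= arguments.
        return [a for a in self.flat_positional if not a.is_out]

args = Arguments(positional=[
    Argument("self"),
    TensorOptionsArguments([Argument("dtype"), Argument("device")]),
    Argument("out", is_out=True),
])
flat_names = [a.name for a in args.flat_positional]
non_out_names = [a.name for a in args.flat_non_out]
```

With this split, callers that need the grouped structure read the unprefixed field, and callers that just want a plain argument list opt in via the flat_ prefix.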

121 files changed

Lines changed: 5315 additions & 1420 deletions


.github/workflows/lint.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -75,7 +75,7 @@ jobs:
       - name: Run flake8
         run: |
           set -eux
-          pip install flake8==3.8.2 flake8-bugbear==20.1.4 flake8-comprehensions==3.3.0 flake8-executable==2.0.4 flake8-pyi==20.5.0 mccabe pycodestyle==2.6.0 pyflakes==2.2.0
+          pip install -r requirements-flake8.txt
           flake8 --version
           flake8 | tee ${GITHUB_WORKSPACE}/flake8-output.txt
       - name: Add annotations
```

.jenkins/caffe2/build.sh

Lines changed: 0 additions & 43 deletions
```diff
@@ -18,49 +18,6 @@ build_to_cmake () {
 
 
 SCCACHE="$(which sccache)"
-if [ "$(which gcc)" != "/root/sccache/gcc" ]; then
-  # Setup SCCACHE
-  ###############################################################################
-  # Setup sccache if SCCACHE_BUCKET is set
-  if [ -n "${SCCACHE_BUCKET}" ]; then
-    mkdir -p ./sccache
-
-    SCCACHE="$(which sccache)"
-    if [ -z "${SCCACHE}" ]; then
-      echo "Unable to find sccache..."
-      exit 1
-    fi
-
-    # Setup wrapper scripts
-    wrapped="cc c++ gcc g++ x86_64-linux-gnu-gcc"
-    if [[ "${BUILD_ENVIRONMENT}" == *-cuda* ]]; then
-      wrapped="$wrapped nvcc"
-    fi
-    for compiler in $wrapped; do
-      (
-        echo "#!/bin/sh"
-
-        # TODO: if/when sccache gains native support for an
-        # SCCACHE_DISABLE flag analogous to ccache's CCACHE_DISABLE,
-        # this can be removed. Alternatively, this can be removed when
-        # https://github.com/pytorch/pytorch/issues/13362 is fixed.
-        #
-        # NOTE: carefully quoted - we want `which compiler` to be
-        # resolved as we execute the script, but SCCACHE_DISABLE and
-        # $@ to be evaluated when we execute the script
-        echo 'test $SCCACHE_DISABLE && exec '"$(which $compiler)"' "$@"'
-
-        echo "exec $SCCACHE $(which $compiler) \"\$@\""
-      ) > "./sccache/$compiler"
-      chmod +x "./sccache/$compiler"
-    done
-
-    export CACHE_WRAPPER_DIR="$PWD/sccache"
-
-    # CMake must find these wrapper scripts
-    export PATH="$CACHE_WRAPPER_DIR:$PATH"
-  fi
-fi
 
 # Setup ccache if configured to use it (and not sccache)
 if [ -z "${SCCACHE}" ] && which ccache > /dev/null; then
```
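The deleted block's technique is generic: generate a tiny `#!/bin/sh` wrapper per compiler that re-execs the real binary through sccache, then put the wrapper directory first on PATH so CMake picks the wrappers up. A hypothetical Python re-sketch of that logic (function name and fallbacks are invented for the sketch, not part of the repo):

```python
import os
import shutil
import stat
import tempfile

def write_sccache_wrappers(wrapper_dir, compilers=("cc", "c++", "gcc", "g++")):
    """Write one shell wrapper per compiler that routes through sccache."""
    os.makedirs(wrapper_dir, exist_ok=True)
    # Assumption for the sketch: sccache is on PATH (the shell script
    # instead errors out when `which sccache` comes back empty).
    sccache = shutil.which("sccache") or "sccache"
    paths = []
    for compiler in compilers:
        real = shutil.which(compiler) or compiler
        path = os.path.join(wrapper_dir, compiler)
        with open(path, "w") as f:
            f.write("#!/bin/sh\n")
            # Mirrors the carefully-quoted shell line: if SCCACHE_DISABLE
            # is set at run time, bypass the cache entirely.
            f.write('test $SCCACHE_DISABLE && exec %s "$@"\n' % real)
            f.write('exec %s %s "$@"\n' % (sccache, real))
        # Equivalent of chmod +x on the wrapper.
        os.chmod(path, os.stat(path).st_mode | stat.S_IEXEC)
        paths.append(path)
    return paths

wrappers = write_sccache_wrappers(tempfile.mkdtemp())
```

Prepending the wrapper directory to PATH (as the deleted `export PATH="$CACHE_WRAPPER_DIR:$PATH"` line did) is what makes build tools resolve `cc`, `gcc`, etc. to the caching wrappers instead of the real compilers.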

.jenkins/pytorch/build.sh

Lines changed: 2 additions & 33 deletions
```diff
@@ -1,19 +1,14 @@
 #!/bin/bash
 
+set -ex
+
 # Required environment variable: $BUILD_ENVIRONMENT
 # (This is set by default in the Docker images we build, so you don't
 # need to set it yourself.
 
 # shellcheck disable=SC2034
 COMPACT_JOB_NAME="${BUILD_ENVIRONMENT}"
 
-# Temp: use new sccache
-if [[ -n "$IN_CI" && "$BUILD_ENVIRONMENT" == *rocm* ]]; then
-  # Download customized sccache
-  sudo curl --retry 3 http://repo.radeon.com/misc/.sccache_amd/sccache -o /opt/cache/bin/sccache
-  sudo chmod 755 /opt/cache/bin/sccache
-fi
-
 source "$(dirname "${BASH_SOURCE[0]}")/common.sh"
 
 if [[ "$BUILD_ENVIRONMENT" == *-linux-xenial-py3-clang5-asan* ]]; then
@@ -124,32 +119,6 @@ if [[ "$BUILD_ENVIRONMENT" == *rocm* ]]; then
   export MAX_JOBS=$(($(nproc) - 1))
 fi
 
-# ROCm CI is using Caffe2 docker images, which needs these wrapper
-# scripts to correctly use sccache.
-if [[ -n "${SCCACHE_BUCKET}" && -z "$IN_CI" ]]; then
-  mkdir -p ./sccache
-
-  SCCACHE="$(which sccache)"
-  if [ -z "${SCCACHE}" ]; then
-    echo "Unable to find sccache..."
-    exit 1
-  fi
-
-  # Setup wrapper scripts
-  for compiler in cc c++ gcc g++ clang clang++; do
-    (
-      echo "#!/bin/sh"
-      echo "exec $SCCACHE $(which $compiler) \"\$@\""
-    ) > "./sccache/$compiler"
-    chmod +x "./sccache/$compiler"
-  done
-
-  export CACHE_WRAPPER_DIR="$PWD/sccache"
-
-  # CMake must find these wrapper scripts
-  export PATH="$CACHE_WRAPPER_DIR:$PATH"
-fi
-
 if [[ -n "$IN_CI" ]]; then
   # Set ROCM_ARCH to gfx900 and gfx906 for CI builds
   echo "Limiting PYTORCH_ROCM_ARCH to gfx90[06] for CI builds"
```

.travis.aten.yml

Lines changed: 0 additions & 31 deletions
This file was deleted.

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -891,7 +891,7 @@ which is in PyTorch's `requirements.txt`.
 ## Pre-commit tidy/linting hook
 
 We use clang-tidy and flake8 (installed with flake8-bugbear,
-flake8-comprehensions, flake8-mypy, and flake8-pyi) to perform additional
+flake8-comprehensions, flake8-pyi, and others) to perform additional
 formatting and semantic checking of code. We provide a pre-commit git hook for
 performing these checks, before a commit is created:
 
```

aten/src/ATen/Config.h.in

Lines changed: 1 addition & 0 deletions
```diff
@@ -8,6 +8,7 @@
 
 #define AT_MKLDNN_ENABLED() @AT_MKLDNN_ENABLED@
 #define AT_MKL_ENABLED() @AT_MKL_ENABLED@
+#define AT_FFTW_ENABLED() @AT_FFTW_ENABLED@
 #define AT_NNPACK_ENABLED() @AT_NNPACK_ENABLED@
 #define CAFFE2_STATIC_LINK_CUDA() @CAFFE2_STATIC_LINK_CUDA_INT@
 #define AT_BUILD_WITH_BLAS() @USE_BLAS@
```

aten/src/ATen/Context.cpp

Lines changed: 2 additions & 1 deletion
```diff
@@ -3,6 +3,7 @@
 #include <ATen/Context.h>
 
 #include <c10/core/TensorOptions.h>
+#include <c10/core/CPUAllocator.h>
 
 #include <mutex>
 #include <sstream>
@@ -232,7 +233,7 @@ bool Context::setFlushDenormal(bool on) {
 }
 
 Allocator* getCPUAllocator() {
-  return getTHDefaultAllocator();
+  return c10::GetCPUAllocator();
 }
 
 // override_allow_tf32_flag = true
```

aten/src/ATen/MemoryOverlap.cpp

Lines changed: 12 additions & 0 deletions
```diff
@@ -75,4 +75,16 @@ void assert_no_partial_overlap(TensorImpl* a, TensorImpl* b) {
     "Please clone() the tensor before performing the operation.");
 }
 
+void assert_no_overlap(const Tensor& a, const Tensor& b) {
+  assert_no_overlap(a.unsafeGetTensorImpl(), b.unsafeGetTensorImpl());
+}
+
+void assert_no_overlap(TensorImpl* a, TensorImpl* b) {
+  const auto lap = get_overlap_status(a, b);
+  TORCH_CHECK(lap != MemOverlapStatus::PARTIAL && lap != MemOverlapStatus::FULL,
+    "unsupported operation: some elements of the input tensor and "
+    "the written-to tensor refer to a single memory location. "
+    "Please clone() the tensor before performing the operation.");
+}
+
 }
```
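The new assert_no_overlap check rejects writes into memory that also backs an input. A minimal pure-Python illustration of why (this is not the ATen implementation; the View class and its overlaps method are invented for the sketch): when an output view partially overlaps an input view of the same storage, an element-by-element kernel reads values it has already clobbered.

```python
base = list(range(6))  # shared "storage"

class View:
    """Tiny stand-in for a tensor view over shared storage."""
    def __init__(self, storage, start, length):
        self.storage, self.start, self.length = storage, start, length
    def __getitem__(self, i):
        return self.storage[self.start + i]
    def __setitem__(self, i, v):
        self.storage[self.start + i] = v
    def overlaps(self, other):
        # Crude interval test, in the spirit of get_overlap_status.
        return (max(self.start, other.start)
                < min(self.start + self.length, other.start + other.length))

inp = View(base, 0, 4)   # elements 0..3
out = View(base, 2, 4)   # elements 2..5 -- partially overlaps inp

# A naive out = inp * 2 kernel reads inp[2] and inp[3] *after* the
# first two writes have already overwritten them through `out`.
for i in range(4):
    out[i] = inp[i] * 2
naive = base[2:6]        # corrupted: not the intended [0, 2, 4, 6]
```

Cloning the input first (exactly what the error message suggests) breaks the aliasing and restores the intended result, which is why the check fires for both PARTIAL and FULL overlap.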

aten/src/ATen/MemoryOverlap.h

Lines changed: 3 additions & 0 deletions
```diff
@@ -27,4 +27,7 @@ CAFFE2_API MemOverlapStatus get_overlap_status(TensorImpl* a, TensorImpl* b);
 CAFFE2_API void assert_no_partial_overlap(const Tensor& a, const Tensor& b);
 void assert_no_partial_overlap(TensorImpl* a, TensorImpl* b);
 
+CAFFE2_API void assert_no_overlap(const Tensor& a, const Tensor& b);
+CAFFE2_API void assert_no_overlap(TensorImpl* a, TensorImpl* b);
+
 }
```

aten/src/ATen/SparseTensorImpl.cpp

Lines changed: 2 additions & 0 deletions
```diff
@@ -46,6 +46,8 @@ SparseTensorImpl::SparseTensorImpl(at::DispatchKeySet key_set, const caffe2::Typ
   AT_ASSERT(values_.sizes() == IntArrayRef({0}));
   AT_ASSERT(values_.device() == indices_.device());
   AT_ASSERT(values_.device() == device());
+
+  is_non_overlapping_and_dense_ = false;
 }
 
 IntArrayRef SparseTensorImpl::strides() const {
```

0 commit comments
