Commit 8c798e0

samestep authored and facebook-github-bot committed
Forbid trailing whitespace (#53406)
Summary:
Context: #53299 (comment)

These are the only hand-written parts of this diff:

- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):

```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: #53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
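As a rough illustration of the detect-and-strip pipeline above, it can be exercised on a throwaway repo (the path and file contents here are invented for the demo); plain GNU `sed` stands in for the `gsed` the commit uses on macOS:

```shell
#!/bin/sh
# Hypothetical throwaway repo to demonstrate the commit's cleanup command.
set -e
mkdir -p /tmp/ws-demo && cd /tmp/ws-demo
git init -q .
printf 'clean line\ndirty line   \n' > sample.txt
git add sample.txt

# -I skips binary files, -l prints only filenames, and ' $' matches a
# space at end of line, so this lists every tracked file that contains
# trailing spaces.
git grep -I -l ' $' -- .

# Strip any run of trailing spaces in place (GNU sed; on macOS the
# commit uses Homebrew's `gsed` because BSD sed's -i flag differs).
sed -i 's/ *$//' sample.txt
git grep -I -l ' $' -- . || echo "no trailing spaces left"
```

The `':(exclude)…'` pathspecs in the real command simply carve vendored directories out of the search; they are omitted here because the demo repo has none.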
1 parent cab2689 commit 8c798e0

238 files changed

Lines changed: 799 additions & 798 deletions


.circleci/scripts/binary_ios_test.sh

Lines changed: 1 addition & 1 deletion
```diff
@@ -24,6 +24,6 @@ rm cert.txt
 if ! [ -x "$(command -v xcodebuild)" ]; then
   echo 'Error: xcodebuild is not installed.'
   exit 1
-fi 
+fi
 PROFILE=PyTorch_CI_2021
 ruby ${PROJ_ROOT}/scripts/xcode_build.rb -i ${PROJ_ROOT}/build_ios/install -x ${PROJ_ROOT}/ios/TestApp/TestApp.xcodeproj -p ${IOS_PLATFORM} -c ${PROFILE} -t ${IOS_DEV_TEAM_ID}
```

.github/workflows/lint.yml

Lines changed: 3 additions & 0 deletions
```diff
@@ -40,6 +40,9 @@ jobs:
           rm -r "shellcheck-${scversion}"
           shellcheck --version
           .jenkins/run-shellcheck.sh
+      - name: Ensure no trailing spaces
+        run: |
+          (! git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' || (echo "The above files have trailing spaces; please remove them"; false))
       - name: Ensure no tabs
         run: |
           (! git grep -I -l $'\t' -- . ':(exclude)*.svg' ':(exclude)**Makefile' ':(exclude)**/contrib/**' ':(exclude)third_party' ':(exclude).gitattributes' ':(exclude).gitmodules' || (echo "The above files have tabs; please convert them to spaces"; false))
```
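The `(! git grep … || (echo …; false))` shape used by both lint steps is worth unpacking: `git grep` exits 0 when it finds matches, so the negation makes the step fail exactly when offending files exist, and the `|| (echo …; false)` arm prints a hint while preserving the nonzero exit code. A small sketch, with a made-up demo repo and a `check` helper that is not part of the actual workflow:

```shell
#!/bin/sh
# Demo repo; the path and file names are invented for illustration.
mkdir -p /tmp/lint-demo && cd /tmp/lint-demo
git init -q .
printf 'ok\n' > clean.txt
printf 'bad   \n' > dirty.txt
git add clean.txt dirty.txt

# Same idiom as the workflow step: exits nonzero iff some tracked file
# contains a trailing space, printing the offenders plus a hint.
check() {
  (! git grep -I -l ' $' -- . \
    || (echo "The above files have trailing spaces; please remove them"; false))
}

check && echo "lint passed" || echo "lint failed"   # lint failed

# Fix the offender and re-run.
sed -i 's/ *$//' dirty.txt
check && echo "lint passed" || echo "lint failed"   # lint passed
```

In a GitHub Actions `run:` step a nonzero exit fails the job, which is why the inner `false` matters: without it, the `echo` would succeed and swallow the failure.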

.jenkins/caffe2/bench.sh

Lines changed: 1 addition & 1 deletion
```diff
@@ -21,7 +21,7 @@ if (( $num_gpus == 0 )); then
 fi
 if (( $num_gpus >= 1 )); then
   "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 128 --epoch_size 12800 --num_epochs 2 --num_gpus 1
-  # Let's skip the fp16 bench runs for now, as it recompiles the miopen kernels and can take 10+min to run. 
+  # Let's skip the fp16 bench runs for now, as it recompiles the miopen kernels and can take 10+min to run.
   # We can resume when we (1) bindmount the miopen cache folder in jenkins; (2) install the pre-compiled miopen kernel library in the docker
   # "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 256 --epoch_size 25600 --num_epochs 2 --num_gpus 1 --float16_compute --dtype float16
 fi
```

CONTRIBUTING.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -159,7 +159,7 @@ with `brew install cmake` if you are developing on MacOS or Linux system.
 check whether your Git local or global config file contains any `submodule.*` settings. If yes, remove them and try again.
 (please reference [this doc](https://git-scm.com/docs/git-config#Documentation/git-config.txt-submoduleltnamegturl) for more info).
 
-- If you encountered error such as 
+- If you encountered error such as
 ```
 fatal: unable to access 'https://github.com/pybind11/pybind11.git': could not load PEM client certificate ...
 ```
@@ -169,11 +169,11 @@ with `brew install cmake` if you are developing on MacOS or Linux system.
 openssl x509 -noout -in <cert_file> -dates
 ```
 
-- If you encountered error that some third_party modules are not checkout correctly, such as 
+- If you encountered error that some third_party modules are not checkout correctly, such as
 ```
 Could not find .../pytorch/third_party/pybind11/CMakeLists.txt
 ```
-remove any `submodule.*` settings in your local git config (`.git/config` of your pytorch repo) and try again. 
+remove any `submodule.*` settings in your local git config (`.git/config` of your pytorch repo) and try again.
 
 ## Nightly Checkout & Pull
```

GLOSSARY.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -1,4 +1,4 @@
-# PyTorch Glossary 
+# PyTorch Glossary
 
 - [PyTorch Glossary](#pytorch-glossary)
   - [Operation and Kernel](#operation-and-kernel)
@@ -39,7 +39,7 @@ For example, this
 to create Custom Operations.
 
 ## Kernel
-Implementation of a PyTorch operation, specifying what should be done when an 
+Implementation of a PyTorch operation, specifying what should be done when an
 operation executes.
 
 ## Compound Operation
@@ -57,7 +57,7 @@ Same as Compound Operation.
 ## Leaf Operation
 An operation that's considered a basic operation, as opposed to a Compound
 Operation. Leaf Operation always has dispatch functions defined, usually has a
-derivative function defined as well. 
+derivative function defined as well.
 
 ## Device Kernel
 Device-specific kernel of a leaf operation.
@@ -79,4 +79,4 @@ using just-in-time compilation.
 
 ## Scripting
 Using `torch.jit.script` on a function to inspect source code and compile it as
-TorchScript code. 
+TorchScript code.
```

aten/src/ATen/BatchingRegistrations.cpp

Lines changed: 1 addition & 1 deletion
```diff
@@ -300,7 +300,7 @@ Tensor trace_backward_batching_rule(const Tensor& grad, IntArrayRef input_sizes)
   auto grad_input = at::zeros(grad_physical.getPhysicalShape(input_sizes), grad.options());
   // Batched Diagonal View
   auto grad_input_diag = at::diagonal(grad_input, /*offset*/0, /*dim1*/-2, /*dim2*/-1);
-  // Append a dimension of size one to the grad output 
+  // Append a dimension of size one to the grad output
   auto grad_physical_tensor = grad_physical.tensor().unsqueeze(-1);
   grad_input_diag.copy_(grad_physical_tensor);
   return grad_physical.getPhysicalToLogicalMap().apply(grad_input);
```

aten/src/ATen/CPUGeneratorImpl.cpp

Lines changed: 2 additions & 2 deletions
```diff
@@ -38,7 +38,7 @@ struct CPUGeneratorImplStateLegacy {
 * new data introduced in at::CPUGeneratorImpl and the legacy state. It is used
 * as a helper for torch.get_rng_state() and torch.set_rng_state()
 * functions.
-*/ 
+*/
 struct CPUGeneratorImplState {
   CPUGeneratorImplStateLegacy legacy_pod;
   float next_float_normal_sample;
@@ -119,7 +119,7 @@ uint64_t CPUGeneratorImpl::seed() {
 * must be a strided CPU byte tensor and of the same size as either
 * CPUGeneratorImplStateLegacy (for legacy CPU generator state) or
 * CPUGeneratorImplState (for new state).
-* 
+*
 * FIXME: Remove support of the legacy state in the future?
 */
 void CPUGeneratorImpl::set_state(const c10::TensorImpl& new_state) {
```

aten/src/ATen/SparseTensorUtils.h

Lines changed: 1 addition & 1 deletion
```diff
@@ -94,7 +94,7 @@ TORCH_API Tensor flatten_indices(const Tensor& indices, IntArrayRef full_size, b
 // new_indices = [ 3, 1, 3 ] # uncoalesced
 TORCH_API Tensor flatten_indices_by_dims(const Tensor& indices, const IntArrayRef& sizes, const IntArrayRef& dims_to_flatten);
 
-// Find the CSR representation for a row `indices` from the COO format 
+// Find the CSR representation for a row `indices` from the COO format
 TORCH_API Tensor coo_to_csr(const int64_t* indices, int64_t dim, int64_t nnz);
 
 }} // namespace at::sparse
```

aten/src/ATen/Version.cpp

Lines changed: 1 addition & 1 deletion
```diff
@@ -114,7 +114,7 @@ std::string used_cpu_capability() {
     case native::CPUCapability::AVX2:
       ss << "AVX2";
       break;
-#endif 
+#endif
     default:
       break;
   }
```

aten/src/ATen/VmapTransforms.h

Lines changed: 1 addition & 1 deletion
```diff
@@ -47,7 +47,7 @@ using VmapDimVector = SmallVector<int64_t, kVmapStaticDimVecSize>;
 // argument.
 
 // VmapTransform for operators that take tensors with multiple batch dims.
-// Given one or more logical views on Tensors, `logicalToPhysical` 
+// Given one or more logical views on Tensors, `logicalToPhysical`
 // permutes all of the batch dims to the front of the tensor, aligns
 // and expands the batch dims to match each other (according to their `level`),
 // and returns a VmapPhysicalView on the tensor(s).
```
