Remove unnecessary whitespace in complex tensors #36331
choidongyeon wants to merge 9 commits into pytorch:master from
Conversation
💊 CircleCI build failures summary and remediations
As of commit 972eb13 (more details on the Dr. CI page):
XLA failure: job pytorch_xla_linux_xenial_py3_6_clang7_test is failing. Please create an issue with title prefixed by 🚧. 1 upstream failure: these were probably caused by upstream breakages.
I just saw that this PR leads to this: It should instead output
Also, we should move this check inside the format function within the complex branch and not pass
@anjali411 Thanks for pointing these out. Will work on them later today.
Addressed this in the most recent commit. Will fix the second one in a bit.
This was a great suggestion. The only problem is that
- for value in tensor_view])
+ for value in tensor_view:
+     value_str = '{}'.format(value)
+     if self.complex_dtype and self.complex_with_decimal:
if self.complex_dtype:
    if self.complex_with_decimal:
        value_str = ('{{:.{}f}}').format(PRINT_OPTS.precision).format(value)
    else:
        value_str = "{:.0f}".format(value.item())
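As a standalone sketch of what the two branches above do (here `precision` stands in for `PRINT_OPTS.precision`, whose torch default is 4; `format_value` and `has_decimal` are hypothetical names, not torch internals):

```python
# Hypothetical demonstration of the suggested formatting branches.
# `precision` stands in for PRINT_OPTS.precision (torch default: 4).
precision = 4

def format_value(value, has_decimal):
    if has_decimal:
        # Build '{:.4f}' dynamically, then apply it: 1.34 -> '1.3400'
        return ('{{:.{}f}}').format(precision).format(value)
    # Integral component: render without decimals, e.g. 3.0 -> '3'
    return '{:.0f}'.format(value)

print(format_value(1.34, True))   # 1.3400
print(format_value(3.0, False))   # 3
```

The nested `format` call is the key trick: the outer `.format(precision)` fills in the precision to produce the format string itself, and the inner `.format(value)` applies it to the element.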
attribute name is okay though?
I think we should change it to `has_non_zero_decimal_val` as mentioned here :D
tensor_view = tensor.reshape(-1)
if not self.floating_dtype:
    self.complex_with_decimal = False
I think we should change it to `has_non_zero_decimal_val` and perhaps add a comment that it's only used for complex
@choidongyeon looks good overall. We should follow numpy and remove the extra space between the real and imaginary values.
Easy peasy. Updated the PR, and also updated the PR summary with some sample current outputs.
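A minimal sketch of the numpy-style spacing being asked for here (hypothetical helper, not the torch implementation): no space around the sign, and the imaginary part always carries an explicit `+` or `-`.

```python
# Hypothetical numpy-style complex formatting: '(1.0000+1.3400j)',
# with no spaces between the real part, the sign, and the imag part.
def format_complex(z, precision=4):
    fmt = '{{:.{}f}}'.format(precision)
    real = fmt.format(z.real)
    imag = fmt.format(abs(z.imag))
    sign = '+' if z.imag >= 0 else '-'
    return '({}{}{}j)'.format(real, sign, imag)

print(format_complex(1 + 1.34j))   # (1.0000+1.3400j)
print(format_complex(6.5 - 7j))    # (6.5000-7.0000j)
```

Formatting `abs(z.imag)` and emitting the sign separately avoids the double-sign artifact (`+-7.0000j`) that naive string concatenation would produce for negative imaginary parts.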
anjali411 left a comment:
great job! thanks for working on this :D
facebook-github-bot left a comment:
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@anjali411 Thanks for being so responsive!
Of course! Let me know if you'd like to work on other issues related to complex numbers. Yeah, I realized it was something else, so I removed it.
@anjali411 merged this pull request in 2f5b523.
Summary: This PR addresses Issue pytorch#36279. Previously, printing complex tensors would sometimes yield extra spaces before the elements, as shown below:

```
print(torch.tensor([[1 + 1.340j, 3 + 4j], [1.2 + 1.340j, 6.5 + 7j]], dtype=torch.complex64))
```

would yield

```
tensor([[(1.0000 + 1.3400j), (3.0000 + 4.0000j)],
        [(1.2000 + 1.3400j), (6.5000 + 7.0000j)]], dtype=torch.complex64)
```

This occurs primarily because, when the max width for the element is being assigned, the formatter's `max_width` is calculated prior to truncating the float values. As a result, `self.max_width` would end up being much longer than the final length of the element string to be printed. I address this by adding a boolean variable that checks whether a complex tensor contains only ints, and changing the control flow for calculating `self.max_width` accordingly.

Here are some sample outputs of both float and complex tensors:

```
tensor([[0., 0.],
        [0., 0.]], dtype=torch.float64)
tensor([[(0.+0.j), (0.+0.j)],
        [(0.+0.j), (0.+0.j)]], dtype=torch.complex64)
tensor([1.2000, 1.3400], dtype=torch.float64)
tensor([(1.2000+1.3400j)], dtype=torch.complex64)
tensor([[(1.0000+1.3400j), (3.0000+4.0000j)],
        [(1.2000+1.3400j), (6.5000+7.0000j)]], dtype=torch.complex64)
tensor([1.0000, 2.0000, 3.0000, 4.5000])
tensor([(1.+2.j)], dtype=torch.complex64)
```

cc ezyang anjali411 dylanbespalko

Pull Request resolved: pytorch#36331
Differential Revision: D20955663
Pulled By: anjali411
fbshipit-source-id: c26a651eb5c9db6fcc315ad8d5c1bd9f4b4708f7
This PR addresses Issue #36279.
Previously, printing of complex tensors would sometimes yield extra spaces before the elements as shown below:
would yield
This occurs primarily because, when the max width for the element is being assigned, the formatter's `max_width` is calculated prior to truncating the float values. As a result, `self.max_width` would end up being much longer than the final length of the element string to be printed.

I address this by adding a boolean variable that checks whether a complex tensor contains only ints, and changing the control flow for calculating `self.max_width` accordingly.

Here are some sample outputs of both float and complex tensors:
cc @ezyang @anjali411 @dylanbespalko
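The width mismatch described above can be illustrated in isolation (this is a hypothetical sketch of the failure mode, not the actual `torch._tensor_str` internals): measuring the column width on the untruncated repr of each value, while printing truncates to a fixed precision, yields padding that is wider than anything actually printed.

```python
# Hypothetical illustration of the bug: width is measured on the full
# repr, but printing truncates to 4 decimal places (PRINT_OPTS.precision
# defaults to 4 in torch), so the measured width overshoots.
values = [1 / 3, 2 / 3]

# Width measured before truncation (what the buggy path effectively did):
# str(1/3) is '0.3333333333333333', 18 characters wide.
naive_width = max(len(str(v)) for v in values)

# Width of what actually gets printed at precision 4: '0.3333', 6 wide.
printed = ['{:.4f}'.format(v) for v in values]
true_width = max(len(s) for s in printed)

print(naive_width, true_width)  # naive_width far exceeds true_width
```

Padding every element to `naive_width` instead of `true_width` is exactly what produced the extra spaces inside `(1.0000 + 1.3400j)`; computing the width from the truncated strings removes them.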