
override gcc version in cuda related test #38675

Closed
glaringlee wants to merge 3 commits into gh/glaringlee/21/base from gh/glaringlee/21/head

Conversation

glaringlee (Contributor) commented May 18, 2020

Stack from ghstack:

This is to add a CI test for the CUDA 9.2 + gcc 5.4 combination.
We cannot change the gcc version for CUDA-related CI tests directly, so I added a cuda_gcc_override tag to override the gcc version in CUDA-related tests.

Differential Revision: D21626921

glaringlee requested a review from seemethere May 18, 2020 20:51
dr-ci bot commented May 18, 2020

💊 CI failures summary and remediations

As of commit a6ac4b4 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

This comment has been revised 5 times.

glaringlee pushed a commit that referenced this pull request May 18, 2020
ghstack-source-id: e6e71db
Pull Request resolved: #38675
facebook-github-bot (Contributor) commented

@glaringlee merged this pull request in 5e55f08.

facebook-github-bot deleted the gh/glaringlee/21/head branch May 22, 2020 14:16
glaringlee pushed a commit that referenced this pull request May 27, 2020
This is to reland #38675 and to test cpp_extension compatibility in _test only; that is enough, since the purpose of this test is to make sure PyTorch and C++ extensions are compatible with xenial + CUDA 9.2 + gcc 5.4 (a minimal extension source of this kind is sketched after this message).

Differential Revision: [D21731026](https://our.internmc.facebook.com/intern/diff/D21731026)

[ghstack-poisoned]
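
As a hedged illustration only: a cpp_extension compatibility check of the kind described above builds a small C++ source with the image's host gcc. The snippet below is a minimal sketch of such an extension source; the `add_one` function and module contents are assumptions for illustration, not the actual extension used by this PR's test.

```cpp
// Minimal, self-contained C++ extension source (illustrative only; not the
// extension used by this PR's test). torch.utils.cpp_extension compiles a
// file like this with the image's host gcc, which is exactly what a
// xenial + CUDA 9.2 + gcc 5.4 compatibility check needs to exercise.
#include <torch/extension.h>

// Trivial op: return a copy of the input with 1 added to every element.
torch::Tensor add_one(const torch::Tensor& x) {
  return x + 1;
}

// Register the op so it can be imported from Python once the module is built.
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("add_one", &add_one, "Add one to every element of a tensor");
}
```

If this source builds and imports on the xenial + CUDA 9.2 + gcc 5.4 image, PyTorch headers and the extension toolchain are compatible with that environment, which is the property the test is meant to guard.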
glaringlee pushed a commit that referenced this pull request May 28, 2020
This is to reland #38675 and to test cpp_extension compatibility in _test only; that is enough, since the purpose of this test is to make sure PyTorch and C++ extensions are compatible with xenial + CUDA 9.2 + gcc 5.4.

Two changes that are not compatible with gcc 5.4 (+ CUDA 9.2) were introduced recently:
#37849
#38627
These caused the following CI failures:
https://app.circleci.com/pipelines/github/pytorch/pytorch/173756/workflows/7445e169-9c26-4ec4-a23a-ff6160d155b1/jobs/5582207/steps
https://app.circleci.com/pipelines/github/pytorch/pytorch/173970/workflows/bf0de0f2-9156-4c8f-a097-53ca8e20d4b0/jobs/5589265/steps

The root cause is that gcc 5.4 does not handle uniform initialization lists well; it cannot deduce the correct type in some cases. This is probably a bug in the gcc 5 compiler, so I modified that code slightly to make it compatible with CUDA 9.2 + gcc 5.4 (the kind of change involved is sketched after this message).

People are still using xenial + gcc 5.4 + CUDA 9.x, so this environment should be covered until xenial is deprecated.

Differential Revision: [D21731026](https://our.internmc.facebook.com/intern/diff/D21731026)

[ghstack-poisoned]
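
To make the kind of fix concrete, here is a minimal, hypothetical C++ sketch (assumed identifiers, not the actual PyTorch code touched by this reland) of the workaround the message describes: instead of relying on type deduction from a braced initializer list, the container and element types are spelled out explicitly, so gcc 5.4 has nothing to deduce.

```cpp
// Illustrative only: hypothetical identifiers, not the code changed in the
// reland. The pattern is the point: rather than relying on type deduction
// from a braced initializer list, spell the container and element type out.
#include <array>
#include <cstdint>
#include <iostream>

int64_t compute_stride() { return 4; }

int main() {
  // Deduction-based style: `auto` deduces std::initializer_list<int64_t>.
  // The commit message reports that gcc 5.4 can deduce the wrong type for
  // braced lists of this sort in some (CUDA 9.2 host-compiled) contexts.
  // auto dims = {int64_t{1}, compute_stride(), int64_t{2}};

  // Workaround style: name the type explicitly so nothing is deduced from
  // the braced list.
  const std::array<int64_t, 3> dims{1, compute_stride(), 2};

  for (int64_t d : dims) {
    std::cout << d << '\n';
  }
  return 0;
}
```

Spelling the type out is conservative: it compiles identically under newer compilers and avoids depending on list-deduction behavior that, per this PR, older gcc 5.x handles inconsistently.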
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary: Pull Request resolved: pytorch#38675

Test Plan: Imported from OSS

Differential Revision: D21626921

Pulled By: glaringlee

fbshipit-source-id: b645845aa831cb64078fe2309881038138abb443
