Fix flakiness in pfcwd/test_pfcwd_cli.py #19969
Merged
StormLiangMS merged 8 commits into sonic-net:master on Aug 12, 2025
Conversation
Collaborator: /azp run
Azure Pipelines successfully started running 1 pipeline(s).
Force-pushed 224c3af to 4af662d
Force-pushed 4af662d to f9b2a21
lipxu reviewed on Aug 1, 2025
lipxu approved these changes on Aug 1, 2025
Collaborator: @vivekverma-arista PR conflicts with 202411 branch
Collaborator: @vivekverma-arista PR conflicts with 202505 branch
Contributor (Author): 202505 cherry pick: #20247
Contributor (Author): 202411 cherry pick: #20248
Commits referencing this pull request (duplicated copies of the PR description omitted):
ashutosh-agrawal pushed a commit to ashutosh-agrawal/sonic-mgmt on Aug 14, 2025
vidyac86 pushed a commit to vidyac86/sonic-mgmt on Oct 23, 2025
opcoder0 pushed a commit to opcoder0/sonic-mgmt on Dec 8, 2025
gshemesh2 pushed a commit to gshemesh2/sonic-mgmt on Dec 16, 2025
AharonMalkin pushed a commit to AharonMalkin/sonic-mgmt on Dec 16, 2025
gshemesh2 pushed a commit to gshemesh2/sonic-mgmt on Dec 21, 2025
venu-nexthop pushed a commit to venu-nexthop/sonic-mgmt on Jan 13, 2026
gshemesh2 pushed a commit to gshemesh2/sonic-mgmt on Jan 26, 2026
ytzur1 pushed a commit to ytzur1/sonic-mgmt on Feb 2, 2026
venu-nexthop pushed a commit to venu-nexthop/sonic-mgmt on Mar 27, 2026
Description of PR
Summary:
Fixes #714, #18496
Type of change
Back port request
Approach
What is the motivation for this PR?
Recent fix: #17411
The test was flaky before this fix (and continues to be so). When the test picks an egress interface that happens to be a member of a LAG with multiple members, only that member is stormed, and some of the traffic successfully egresses out of the other LAG members, leading to fewer drops than expected when PFCWD is triggered with the DROP action. The earlier fix shut down all but one LAG member by reducing the number of min_links. But the matching config on the cEOS side was missing, so the LAG doesn't come up after the other LAG members are shut down.
This change rectifies that for cEOS neighbors.
How did you do it?
The fix is to change the min_links setting for the involved port channel on the cEOS side as well, so that the neighbor's LAG stays up with a single active member.
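As an illustrative sketch only (not the actual change in this PR), the cEOS-side adjustment would resemble the following EOS config fragment; the port-channel number here is hypothetical, and the real interface depends on the testbed topology:

```
interface Port-Channel101
   ! Allow the LAG to stay up with a single active member,
   ! matching the reduced min_links on the DUT side.
   port-channel min-links 1
```

With min-links lowered to 1 on both ends, the LAG remains up after the test shuts down all but one member, so all stormed traffic is forced through the single remaining link.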
How did you verify/test it?
Stressed this test 10 times on dualtor-120 and t0-116 testbeds with the Arista 7260CX3 platform.
Any platform specific information?
Supported testbed topology if it's a new test case?
Documentation