Test completeness feature for Pytests#1927

Merged
vaibhavhd merged 8 commits into sonic-net:master from vaibhavhd:test-completeness
Jul 27, 2020

Conversation

@vaibhavhd
Contributor

@vaibhavhd vaibhavhd commented Jul 17, 2020

Description of PR

Summary: Test completeness feature for Pytests
Fixes # (issue)

Type of change

  • Bug fix
  • Testbed and Framework(new/improvement)
  • Test case(new/improvement)

Approach

What is the motivation for this PR?

To save time and allow running tests at different levels of completeness. Some tests are very elaborate, and at present there is no way to run them partially or up to a desired level. With the proposed feature, tests can define multiple levels of completeness, and a logical knob (a Pytest marker) is provided to adjust the coverage of test execution.

How did you do it?

Test completeness is provided as a Pytest marker. These markers can be placed at the module, test, or other levels, and the innermost level (if present) supersedes the higher levels.
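The precedence rule above can be sketched in plain Python. The level names and values are taken from the plugin's CLI help text below; `innermost_level` is a hypothetical helper for illustration (pytest's `request.node.iter_markers()` yields markers innermost-first, which is what makes the inner declaration win):

```python
from enum import IntEnum

# Level names/values as defined in the plugin's CLI help text
class CompletenessLevel(IntEnum):
    debug = 0
    basic = 1
    confident = 2
    thorough = 3

def innermost_level(marker_args_list):
    """Pick the first non-empty declaration. The list is assumed to be
    ordered innermost-first, the way request.node.iter_markers() yields
    markers, so a test-level marker supersedes a module-level one."""
    for args in marker_args_list:
        if args:
            return args
    return None

# Module declares debug+basic; the test itself narrows it to basic
module_marker = (CompletenessLevel.debug, CompletenessLevel.basic)
test_marker = (CompletenessLevel.basic,)
print(innermost_level([test_marker, module_marker]))
```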

How did you verify/test it?

Tested different Pytest tests at different levels. Also verified hierarchy at the module and test level.

link_flap testcase with --completeness_level=debug:

22:33:34 INFO __init__.py:check_test_completeness:140: Setting test completeness level. Specified: CompletenessLevel.debug. Defined: (<CompletenessLevel.debug: 0>, <CompletenessLevel.basic: 1>)
22:33:34 INFO __init__.py:check_test_completeness:157: Setting the completeness level to CompletenessLevel.debug
...
22:34:13 INFO test_link_flap.py:run_link_flap_test:75: Completeness level set: (<CompletenessLevel.debug: 0>,)
22:34:13 INFO test_link_flap.py:__toggle_one_link:33: Testing link flap on Ethernet8
22:34:14 INFO test_link_flap.py:__toggle_one_link:37: Shutting down  port Ethernet13 connecting to Ethernet8
22:34:24 INFO devices.py:shutdown:756: Shut interface [Ethernet13]
22:34:27 INFO test_link_flap.py:__toggle_one_link:43: Bring up  port Ethernet13 connecting to Ethernet8
22:34:37 INFO devices.py:no_shutdown:763: No shut interface [Ethernet13]
22:34:40 INFO test_link_flap.py:run_link_flap_test:83: Restoring fanout switch ports that were shut down by test
PASSED

link_flap testcase with --completeness_level=basic:

21:44:25 INFO __init__.py:check_test_completeness:140: Setting test completeness level. Specified: CompletenessLevel.basic. Defined: (<CompletenessLevel.debug: 0>, <CompletenessLevel.basic: 1>)
21:44:25 INFO __init__.py:check_test_completeness:157: Setting the completeness level to CompletenessLevel.basic
...
21:45:04 INFO test_link_flap.py:run_link_flap_test:74: Completeness level set: (<CompletenessLevel.basic: 1>,)
21:45:04 INFO test_link_flap.py:__toggle_one_link:32: Testing link flap on Ethernet8
21:45:05 INFO test_link_flap.py:__toggle_one_link:36: Shutting down  port Ethernet13 connecting to Ethernet8
21:45:16 INFO devices.py:shutdown:756: Shut interface [Ethernet13]
21:45:19 INFO test_link_flap.py:__toggle_one_link:42: Bring up  port Ethernet13 connecting to Ethernet8
21:45:28 INFO devices.py:no_shutdown:763: No shut interface [Ethernet13]
21:45:32 INFO test_link_flap.py:__toggle_one_link:32: Testing link flap on Ethernet9
21:45:33 INFO test_link_flap.py:__toggle_one_link:36: Shutting down  port Ethernet14 connecting to Ethernet9
21:45:43 INFO devices.py:shutdown:756: Shut interface [Ethernet14]
21:45:46 INFO test_link_flap.py:__toggle_one_link:42: Bring up  port Ethernet14 connecting to Ethernet9
21:45:56 INFO devices.py:no_shutdown:763: No shut interface [Ethernet14]
21:45:59 INFO test_link_flap.py:__toggle_one_link:32: Testing link flap on Ethernet0
21:46:00 INFO test_link_flap.py:__toggle_one_link:36: Shutting down  port Ethernet5 connecting to Ethernet0
21:46:10 INFO devices.py:shutdown:756: Shut interface [Ethernet5]
21:46:16 INFO test_link_flap.py:__toggle_one_link:42: Bring up  port Ethernet5 connecting to Ethernet0
21:46:26 INFO devices.py:no_shutdown:763: No shut interface [Ethernet5]
21:46:29 INFO test_link_flap.py:__toggle_one_link:32: Testing link flap on Ethernet1
21:46:31 INFO test_link_flap.py:__toggle_one_link:36: Shutting down  port Ethernet6 connecting to Ethernet1
21:46:40 INFO devices.py:shutdown:756: Shut interface [Ethernet6]
21:46:46 INFO test_link_flap.py:__toggle_one_link:42: Bring up  port Ethernet6 connecting to Ethernet1
21:46:56 INFO devices.py:no_shutdown:763: No shut interface [Ethernet6]
21:46:59 INFO test_link_flap.py:__toggle_one_link:32: Testing link flap on Ethernet6
21:47:01 INFO test_link_flap.py:__toggle_one_link:36: Shutting down  port Ethernet11 connecting to Ethernet6
21:47:11 INFO devices.py:shutdown:756: Shut interface [Ethernet11]
21:47:14 INFO test_link_flap.py:__toggle_one_link:42: Bring up  port Ethernet11 connecting to Ethernet6
21:47:24 INFO devices.py:no_shutdown:763: No shut interface [Ethernet11]
21:47:27 INFO test_link_flap.py:__toggle_one_link:32: Testing link flap on Ethernet7
21:47:29 INFO test_link_flap.py:__toggle_one_link:36: Shutting down  port Ethernet12 connecting to Ethernet7
21:47:39 INFO devices.py:shutdown:756: Shut interface [Ethernet12]
21:47:42 INFO test_link_flap.py:__toggle_one_link:42: Bring up  port Ethernet12 connecting to Ethernet7
21:47:51 INFO devices.py:no_shutdown:763: No shut interface [Ethernet12]
21:47:55 INFO test_link_flap.py:__toggle_one_link:32: Testing link flap on Ethernet4
21:47:56 INFO test_link_flap.py:__toggle_one_link:36: Shutting down  port Ethernet9 connecting to Ethernet4
21:48:06 INFO devices.py:shutdown:756: Shut interface [Ethernet9]
21:48:11 INFO test_link_flap.py:__toggle_one_link:42: Bring up  port Ethernet9 connecting to Ethernet4
21:48:21 INFO devices.py:no_shutdown:763: No shut interface [Ethernet9]
...
...
PASSED

Documentation

Feature details and usage are documented here - tests/common/plugins/test_completeness/README.md

topology: specify which topology testcase can be executed on: (t0, t1, ptf, etc)
platform: specify which platform testcase can be executed on: (physical, virtual, broadcom, mellanox, etc)
completeness_level: specify test completeness (coverage) level (Debug, Basic, Confident, Thorough)
supported_completeness_level: test supported levels of completeness (coverage) level (Debug, Basic, Confident, Thorough)
Collaborator


I don't feel it's necessary for a test to declare supported levels.

Contributor Author


Declared supported levels can be used for: 1) global normalization, and 2) cases where the test needs to supersede the module-level completeness.

Contributor


I still don't follow this either. Do you have an example?

Contributor Author


For example, if a testcase has the following supported markers:

pytestmark = [
    pytest.mark.supported_completeness_level(CompletenessLevel.basic, CompletenessLevel.thorough)
]

If this testcase is executed with the CLI argument --completeness_level=confident, normalization is needed to decide what level the test should run at.
In this case, the normalization logic pulls all the supported levels from the testcase during pytest_runtest_setup and decides the normalized value should be basic.

The testcase will finally be run at the basic completeness level.
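A minimal sketch of that normalization step, assuming the tie-breaking rule implied by the example (pick the closest supported level at or below the specified one); the helper name and fallback behavior are illustrative, not the plugin's actual code:

```python
from enum import IntEnum

class CompletenessLevel(IntEnum):
    debug = 0
    basic = 1
    confident = 2
    thorough = 3

def normalize_level(specified, supported):
    """Choose the highest supported level that does not exceed the
    specified one; if every supported level is higher, fall back to
    the lowest supported level."""
    at_or_below = [lvl for lvl in supported if lvl <= specified]
    return max(at_or_below) if at_or_below else min(supported)

supported = (CompletenessLevel.basic, CompletenessLevel.thorough)
# confident is not supported, so it normalizes down to basic
print(normalize_level(CompletenessLevel.confident, supported))
```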

defined_levels = [mark.args for mark in request.node.iter_markers(name="completeness_level")]
logger.info("Completeness level set: {}".format(str(defined_levels)))

@pytest.mark.supported_completeness_level(CompletenessLevel.Debug)
Collaborator


I don't think this should be a marker.

By making it a marker, tests would be selected or deselected when a completeness level is specified. This is not what we want. We want to have this as an argument so that the test can change behavior dynamically.

Contributor Author


As per the discussion, markers seem to be an acceptable method to implement test completeness.
A marker will not necessarily select or deselect the testcase; selecting/deselecting depends on whether pytest.skip is used.

In the proposed approach, markers enable the testcases to run at different completeness levels. Global logic to normalize the level when the specified and defined levels do not match is handled at the test setup level. This prevents duplication of normalization logic in every testcase. Also, after the completeness level is normalized, the testcase is free to decide what to do/execute at different levels of completeness.
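For instance, a testcase could branch on its normalized level along these lines (a hypothetical sketch modeled on the link_flap runs above, where debug flaps a single port and basic flaps all candidate ports; the helper name is an assumption):

```python
from enum import IntEnum

class CompletenessLevel(IntEnum):
    debug = 0
    basic = 1

def ports_to_flap(level, candidate_ports):
    """At debug level exercise only the first port to save time;
    at basic and above, exercise every candidate port."""
    if level == CompletenessLevel.debug:
        return candidate_ports[:1]
    return list(candidate_ports)

ports = ["Ethernet8", "Ethernet9", "Ethernet0"]
print(ports_to_flap(CompletenessLevel.debug, ports))  # single port
print(ports_to_flap(CompletenessLevel.basic, ports))  # all ports
```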

help="only run tests matching the device DEV_TYPE ('physical', 'vs')")
parser.addoption("--completeness_level", metavar="TEST_LEVEL", action="store", type=int, \
help="Coverage level of test - partial to full execution.\n Defined levels: \
Debug = 0, Basic = 1, Confident = 2, Thorough = 3")
Collaborator


Defining both names and values is confusing; please stick with named levels.

Contributor Author


Agreed, it will be much better if just named levels are used. Fixed this.

@lguohan
Contributor

lguohan commented Jul 19, 2020

can we have a real example?

"markers", "device_type(DEV_TYPE): mark test to specify the need for a physical dut or vs only test. allowed values: 'physical', 'vs'"
)
config.addinivalue_line(
"markers", "device_type(TEST_LEVEL): mark test to specify the completeness level for the test. Allowed values: 'Debug', 'Basic' ,'Confident', 'Thorough'"
Collaborator


device_type(TEST_LEVEL) -> test_level(TEST_LEVEL)

Contributor Author


Fixed now. Thanks.

@vaibhavhd vaibhavhd requested a review from yxieca July 21, 2020 22:08
@vaibhavhd
Contributor Author

can we have a real example?

@lguohan - Sure, completeness level usage is now incorporated in link_flap testcase. Here is the link to the real example:
https://github.com/Azure/sonic-mgmt/pull/1927/files?file-filters%5B%5D=.ini&file-filters%5B%5D=.py#diff-b98ecbb312f65933eac7ab90b72f0f8c

4. Specified level matches one of the defined levels
'''
specified_level = item.config.getoption("--completeness_level")
if not specified_level: # Case 1
Contributor


This method is a little long and there's a bunch of different cases to cover. What I would suggest here is to have helper methods for the different cases, so something like:

  • set_default to handle cases 1 & 2
  • normalize_levels to handle case 3

And then once those two are covered you can just set the completeness level at the end and return. The comments are definitely helpful for following what is going on, but it's even easier/quicker to follow if you can just look at the method calls and figure out what each step is doing. :)

Contributor Author


Agreed, and added helper methods to make it easier to follow. Thanks.


for fanout, fanout_port in self.ports_shutdown_by_test:
logging.debug("Restoring fanout switch {} port {} shut down by test".format(fanout.hostname, fanout_port))
fanout.no_shutdown(fanout_port)
normalized_level = [mark.args for mark in request.node.iter_markers(name="completeness_level")][0]
Contributor


I'm not sure that I understand why the mark is necessary if we're just fetching it from the request and checking it in the method. How is this different than having a global normalized_level variable that we can reference or making it a field of the testbed fixture or something similar?

Contributor Author


There might be scope for improvement and simplification.
To explain the machinery, there are two markers being used here:

  1. supported_completeness_level is used to advertise the supported levels. This marker is used by the normalization logic to fetch the supported levels and normalize them against the specified level. This happens during testcase setup for every testcase. I think this marker is still needed for normalization to work during testcase setup.
  2. completeness_level is basically the normalized level that a testcase will be run against. This does NOT have to be a marker. Wouldn't a global variable cause problems if two or more testcases are executed simultaneously? Making it a field within the testbed fixture definitely sounds more reasonable than just a global variable.

Contributor


I didn't get that part of how this can be a field in testbed fixture. I thought normalization is done per testcase depending on what levels are supported by the testcase vs what the user requests.

Contributor Author


You are right, Neetha. The normalization is done per testcase, and the testbed fixture itself is not testcase specific. I think Danny is suggesting adding a field to the testbed fixture for every testcase during the testcase setup step. That wouldn't be a very different approach from just setting a global variable. I am open to suggestions.
My preference for a marker was simply to stamp the testcase with its normalized level in a not-so-global variable, hence a marker in the testcase's request data.
What are the concerns with a marker carrying this normalized value?

Contributor Author


Update - modified the code to use just one marker instead of two. The normalized completeness level is now added to the list of supported levels.
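That single-marker bookkeeping can be sketched in pure Python (the helper name and tuple layout are assumptions for illustration; the real plugin manipulates pytest marker objects during testcase setup):

```python
from enum import IntEnum

class CompletenessLevel(IntEnum):
    debug = 0
    basic = 1
    confident = 2
    thorough = 3

def stamp_normalized(supported_args, normalized):
    """During setup, append the normalized level to the marker's
    argument tuple; the testcase later reads the last entry as the
    level it is actually running at."""
    return tuple(supported_args) + (normalized,)

marker_args = (CompletenessLevel.basic, CompletenessLevel.thorough)
stamped = stamp_normalized(marker_args, CompletenessLevel.basic)
print(stamped[-1])
```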


@vaibhavhd vaibhavhd requested a review from neethajohn July 23, 2020 23:08
@vaibhavhd vaibhavhd merged commit d97c8df into sonic-net:master Jul 27, 2020
@vaibhavhd vaibhavhd deleted the test-completeness branch September 25, 2020 16:20
kazinator-arista pushed a commit to kazinator-arista/sonic-mgmt that referenced this pull request Mar 4, 2026

6 participants