@@ -59,13 +59,18 @@ Setup
 Running Tests
 ~~~~~~~~~~~~~
 
-The core test runner is in ``qcodes/test.py``:
+We don't want to reinvent the wheel, and thus use py.test.
+It's easy to install:
 
 ::
 
-    python qcodes/test.py
-    # optional extra verbosity and fail fast
-    python qcodes/test.py -v -f
+    pip install coverage pytest-cov pytest
+
+Then to test and view the coverage:
+
+::
+
+    py.test --cov=qcodes --cov-report xml --cov-config=.coveragerc
 
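As a quick illustration of the py.test style used above: a test is just a ``test_*`` function with plain assertions, in a file named ``test_*.py`` that py.test discovers automatically. The file and function names below are hypothetical, not part of qcodes:

```python
# test_clipping.py -- a hypothetical minimal test module; py.test
# collects any test_* function in files matching test_*.py.

def clip(value, limit):
    """Toy function under test: clamp value to [-limit, +limit]."""
    return max(-limit, min(limit, value))

def test_clip():
    # plain asserts are all py.test needs; on failure it reports
    # the values on both sides of the comparison
    assert clip(5, 2) == 2
    assert clip(-5, 2) == -2
    assert clip(1, 2) == 1
```

Running ``py.test`` from the repository root would collect and run this alongside the rest of the suite.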
 You can also run single tests with:
 
@@ -78,92 +83,6 @@ You can also run single tests with:
     # or
     python -m unittest qcodes.tests.test_metadata.TestMetadatable.test_snapshot
 
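The dotted path handed to ``python -m unittest`` is just ``module.TestCase.method``. A sketch of the same thing done programmatically, using a stand-in class (the ``Metadatable`` body here is invented for illustration, not the real qcodes implementation):

```python
import unittest

class Metadatable:
    """Hypothetical stand-in for the class under test."""
    def snapshot(self):
        return {"class": type(self).__name__}

class TestMetadatable(unittest.TestCase):
    def test_snapshot(self):
        self.assertEqual(Metadatable().snapshot(), {"class": "Metadatable"})

# Build a suite holding just one named test method -- the programmatic
# equivalent of `python -m unittest pkg.module.TestMetadatable.test_snapshot`
suite = unittest.TestSuite([TestMetadatable("test_snapshot")])
result = unittest.TextTestRunner(verbosity=0).run(suite)
```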
-If you run the core test runner, you should see output that looks
-something like this:
-
-::
-
-    .........***** found one MockMock, testing *****
-    ............................................Timing resolution:
-    startup time: 0.000e+00
-    min/med/avg/max dev: 9.260e-07, 9.670e-07, 1.158e-06, 2.109e-03
-    async sleep delays:
-    startup time: 2.069e-04
-    min/med/avg/max dev: 3.372e-04, 6.376e-04, 6.337e-04, 1.007e-03
-    multiprocessing startup delay and regular sleep delays:
-    startup time: 1.636e-02
-    min/med/avg/max dev: 3.063e-05, 2.300e-04, 2.232e-04, 1.743e-03
-    should go to stdout;should go to stderr;.stdout stderr stdout stderr ..[10:44:09.063 A Queue] should get printed
-    ...................................
-    ----------------------------------------------------------------------
-    Ran 91 tests in 4.192s
-
-    OK
-    Name                        Stmts   Miss  Cover   Missing
-    ----------------------------------------------------------
-    data/data_array.py            104      0   100%
-    data/data_set.py              179    140    22%   38-55, 79-94, 99-104, 123-135, 186-212, 215-221, 224-244, 251-254, 257-264, 272, 280-285, 300-333, 347-353, 360-384, 395-399, 405-407, 414-420, 426-427, 430, 433-438
-    data/format.py                225    190    16%   44-55, 61-62, 70, 78-97, 100, 114-148, 157-188, 232, 238, 246, 258-349, 352, 355-358, 361-368, 375-424, 427-441, 444, 447-451
-    data/io.py                     76     50    34%   71-84, 90-91, 94, 97, 103, 109-110, 119-148, 154-161, 166, 169, 172, 175-179, 182, 185-186
-    data/manager.py               124     89    28%   15-20, 31, 34, 48-62, 65-67, 70, 76-77, 80-84, 90-102, 108-110, 117-121, 142-151, 154-182, 185, 188, 207-208, 215-221, 227-229, 237, 243, 249
-    instrument/base.py             74      0   100%
-    instrument/function.py         45      1    98%   77
-    instrument/ip.py               20     12    40%   10-16, 19-20, 24-25, 29-38
-    instrument/mock.py             63      0   100%
-    instrument/parameter.py       200      2    99%   467, 470
-    instrument/sweep_values.py    107     33    69%   196-207, 220-227, 238-252, 255-277
-    instrument/visa.py             36     24    33%   10-25, 28-32, 35-36, 40-41, 47-48, 57-58, 62-64, 68
-    loops.py                      285    239    16%   65-74, 81-91, 120-122, 133-141, 153-165, 172-173, 188-207, 216-240, 243-313, 316-321, 324-350, 354-362, 371-375, 378-381, 414-454, 457-474, 477-484, 487-491, 510-534, 537-543, 559-561, 564, 577, 580, 590-608, 611-618, 627-628, 631
-    station.py                     35     24    31%   17-32, 35, 45-50, 60, 67-82, 88
-    utils/helpers.py               95      0   100%
-    utils/metadata.py              13      0   100%
-    utils/multiprocessing.py       95      2    98%   125, 134
-    utils/sync_async.py           114      8    93%   166, 171-173, 176, 180, 184, 189-191
-    utils/timing.py                72      0   100%
-    utils/validators.py           110      0   100%
-    ----------------------------------------------------------
-    TOTAL                        2072    814    61%
-
-The key is ``OK`` in the middle (that means all the tests passed), and
-the presence of the coverage report after it. If any tests fail, we do
-not show a coverage report, and the end of the output will contain
-tracebacks and messages about what failed, for example:
-
-::
-
-    ======================================================================
-    FAIL: test_sweep_steps_edge_case (tests.test_instrument.TestParameters)
-    ----------------------------------------------------------------------
-    Traceback (most recent call last):
-      File "/Users/alex/qdev/Qcodes/qcodes/tests/test_instrument.py", line 360, in test_sweep_steps_edge_case
-        self.check_set_amplitude2('Off', log_count=1, history_count=2)
-      File "/Users/alex/qdev/Qcodes/qcodes/tests/test_instrument.py", line 345, in check_set_amplitude2
-        self.assertTrue(line.startswith('negative delay'), line)
-    AssertionError: False is not true : cannot sweep amplitude2 from 0.1 to Off - jumping.
-
-    ----------------------------------------------------------------------
-    Ran 91 tests in 4.177s
-
-    FAILED (failures=1)
-
-The coverage report is only useful if you have been adding new code, to
-see whether your tests visit all of your code. Look at the file(s) you
-have been working on, and ensure that the "missing" section does not
-contain the line numbers of any of the blocks you have touched.
-Currently the core still has a good deal of untested code - eventually
-we will have all of this tested, but for now you can ignore all the rest
-of the missing coverage.
-
-You can also run these tests from inside python. The output is similar
-except that a) you don't get coverage reporting, and b) one test has to
-be skipped because it does not apply within a notebook, so the output
-will end ``OK (skipped=1)``:
-
-.. code:: python
-
-    import qcodes
-    qcodes.test_core()  # optional verbosity = 1 (default) or 2
-
 If the tests pass, you should be ready to start developing!
 
 To test actual instruments, first instantiate them in an interactive
@@ -314,23 +233,13 @@ and then unit testing should be run on pull-request, using CI. Maybe
 simplify to one command that says: if there's enough coverage, it's
 all good, or it fails and says where it fails.
 
-- The standard test commands are listed above under
-  :ref:`runnningtests`. More notes on different test runners can
-  be found in :ref:`testing`.
-
-- Core tests live in
-  `qcodes/tests <https://github.com/qdev-dk/Qcodes/tree/master/qcodes/tests>`__
-  and instrument tests live in the same directories as the instrument
-  drivers.
-
 - We should have a *few* high-level "integration" tests, but simple
   unit tests (that just depend on code in one module) are more valuable
   for several reasons:
 - If complex tests fail it's more difficult to tell why
 - When features change it is likely that more tests will need to change
 - Unit tests can cover many scenarios much faster than integration
   tests.
-
 - If you're having difficulty making unit tests, first consider whether
   your code could be restructured to make it less dependent on other
   modules. Often, however, extra techniques are needed to break down a
@@ -339,9 +248,8 @@ good or fail and where it fails.
 - Patching, one of the most useful parts of the
   `unittest.mock <https://docs.python.org/3/library/unittest.mock.html>`__
   library. This lets you specify exactly how other functions/objects
-  should behave when they're called by the code you are testing. For a
-  simple example, see
-  `test\_multiprocessing.py <https://github.com/qdev-dk/Qcodes/blob/58a8692bed55272f4c5865d6ec37f846154ead16/qcodes/tests/test_multiprocessing.py#L63-L65>`__
+  should behave when they're called by the code you are testing.
+
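A minimal sketch of patching in that style, built around a hypothetical module-level ``instrument_ask`` helper (none of these names come from qcodes): the test swaps the helper for a Mock so no hardware is needed, then checks both the parsed result and the exact command that was sent.

```python
import unittest
from unittest import mock

def instrument_ask(command):
    """Hypothetical helper that would talk to real hardware."""
    raise RuntimeError("no instrument attached")

def read_voltage():
    """Code under test: queries the instrument and parses the reply."""
    return float(instrument_ask("MEAS:VOLT?"))

class TestReadVoltage(unittest.TestCase):
    def test_read_voltage_without_hardware(self):
        # mock.patch replaces instrument_ask for the duration of the
        # `with` block and restores the original afterwards
        target = f"{__name__}.instrument_ask"
        with mock.patch(target, return_value="1.25") as fake:
            self.assertEqual(read_voltage(), 1.25)
            fake.assert_called_once_with("MEAS:VOLT?")

suite = unittest.TestSuite(
    [TestReadVoltage("test_read_voltage_without_hardware")])
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the patch is scoped to the ``with`` block, the real (here, failing) helper is back in place as soon as the test finishes.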
 - Supporting files / data: Let's say you have a test of data acquisition
   and analysis. You can break that up into an acquisition test and an
   analysis test by saving the intermediate state, namely the data file, in