I'm using pytest to test some code based on TensorFlow.

A TestCase is defined for simplicity like:

class TestCase(tf.test.TestCase):
    # ...

The problem is that tf.test.TestCase provides a useful method, self.test_session(), which pytest treats as a test method because its name starts with test_.

As a result, pytest reports more passing tests than the number of test methods I defined, because of the extra test_session() methods.

I use the following code to skip test_session:

from contextlib import contextmanager

import pytest
import tensorflow as tf

class TestCase(tf.test.TestCase):
    @pytest.mark.skip
    @contextmanager
    def test_session(self):
        with super().test_session() as sess:
            yield sess

However, the test report then shows some "s" markers, indicating skipped tests.

Is there any way to mark one specific method as not a test method, without changing pytest's test discovery rules globally?

  • What does the tf.test.TestCase class give you that you want to re-use? The best course of action would be to see if you can get that setup and teardown into a pytest fixture instead. Commented May 30, 2018 at 17:43
  • Call your function make_session or something that doesn't start test_. docs.pytest.org/en/latest/goodpractices.html#test-discovery Commented May 30, 2018 at 19:46
  • @Hitobat You're missing the point - test_session is inherited from the parent hence the name can't be changed. Commented Jun 1, 2018 at 14:41
  • Oh, I thought this was your code and you could change the name. Commented Jun 1, 2018 at 15:17
  • From Tensorflow's documentation test_session() seems to be deprecated in favour of cached_session(). In the code itself, no other method seems to call test_session(). It might be safe to use del tf.test.TestCase.test_session after it has been imported, wrapped in a try/catch if needed. Commented Jan 24, 2019 at 14:19
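The last comment's suggestion can be sketched as follows. Note this is only a sketch: FakeTfTestCase is a hypothetical stand-in for tf.test.TestCase (so the snippet runs without TensorFlow installed); in real code you would write del tf.test.TestCase.test_session right after importing tensorflow.

```python
# Sketch of the comment's suggestion: delete the inherited test_session
# attribute after import, so pytest never discovers it as a test.
# FakeTfTestCase stands in for tf.test.TestCase here (an assumption
# made so the example is self-contained without TensorFlow).
import unittest


class FakeTfTestCase(unittest.TestCase):
    def test_session(self):  # plays the role of the inherited helper
        pass


try:
    del FakeTfTestCase.test_session
except AttributeError:
    pass  # already absent (e.g. a TF version that removed it)

print(hasattr(FakeTfTestCase, "test_session"))  # False
```

The try/except mirrors the comment's "wrapped in a try/catch if needed": newer TensorFlow versions deprecate test_session() in favour of cached_session(), so the attribute may eventually disappear.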

4 Answers

You can set __test__ = False, either directly or by writing a simple decorator; the latter behaves like Nose's nottest decorator.

def nottest(obj):
    obj.__test__ = False
    return obj

class TestMyTest:

    def test_should_not_collect_1(self):
        assert False
    test_should_not_collect_1.__test__ = False

    @nottest
    def test_should_not_collect_2(self):
        assert False

    def test_should_collect(self):
        assert True


def test_should_not_collect_1():
    assert False
test_should_not_collect_1.__test__ = False

@nottest
def test_should_not_collect_2():
    assert False


def test_should_collect():
    assert True

When running pytest, only the functions and methods that are not marked are collected and run:

$ pytest test.py -v
====================================== test session starts ======================================
platform darwin -- Python 3.9.1, pytest-7.0.1, pluggy-1.0.0 -- /Users/lucaswiman/.pyenv/versions/3.9.1/bin/python3.9
cachedir: .pytest_cache
rootdir: /private/tmp
plugins: anyio-2.2.0
collected 2 items                                                                               

test.py::TestMyTest::test_should_collect PASSED                                           [ 50%]
test.py::test_should_collect PASSED                                                       [100%]

======================================= 2 passed in 0.04s =======================================

This behavior is documented here:

Since Pytest 2.6, users can prevent pytest from discovering classes that start with Test by setting a boolean __test__ attribute to False.

Filter out false positives after the test items are collected: create a conftest.py in your tests directory with a custom post-collection hook:

# conftest.py
def pytest_collection_modifyitems(session, config, items):
    items[:] = [item for item in items if item.name != 'test_session']

pytest will still collect the test_session methods (you will see this in the collected n items line of the pytest report), but it will not execute them as tests or consider them anywhere else in the test run.
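A slightly stricter variant (my assumption, not part of the original answer) drops test_session only when it is bound to a class, so an unrelated top-level function that happens to be named test_session would still run:

```python
# conftest.py -- hypothetical stricter variant: only drop test_session
# when it is a method of a test class (item.cls is set for class-bound
# items in pytest), leaving any top-level test_session function alone.
def pytest_collection_modifyitems(session, config, items):
    items[:] = [
        item
        for item in items
        if not (item.name == 'test_session' and item.cls is not None)
    ]
```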


Related: fix for unittest-style tests

Check out this answer.

1 Comment

Amazing! This solution solved the problem perfectly. The configuration file still works even when I run pytest from a subdirectory.

I found this post by its title rather than because of TensorFlow: I was wondering how to define an auxiliary method inside a unittest.TestCase subclass that is not run on its own by unittest.main(). I wanted to reuse code through an auxiliary function (for instance, doing instantiation in a separate function, as is often recommended as good practice), so I am sharing my solution for anyone wondering the same.

A first solution is to define the function outside your unittest.TestCase subclass. In my case, the function needed a feature of unittest.TestCase, namely self.subTest(), so I could not do this.

The @unittest.skip decorator is not a solution either: calling the decorated method from inside another test raises SkipTest, so the calling test is skipped too.

A second, very simple solution is to not start your method's name with 'test'.

If for some reason you still want the name to start with 'test' but the method takes a parameter (other than self), you must give that parameter a default value: unittest will try to run the method without passing a value, which raises an error counted as a failed test. One workaround is to pick a default value that is not a real use case (None worked in my case) and start the method's code with:

if param is None:
    self.skipTest('param is None')

An example:

Suppose you want to test this function:

def my_fun(param1=1, param2=1):
    return param1 / param2

Then with this test code:

# test_my_fun.py

import unittest

class MyFunTestCase(unittest.TestCase):
    @unittest.skip('Skip aux_fun_skipped')
    def aux_fun_skipped(self):
        print("This is aux_fun_skipped.")

    def test_aux_fun_with_param_failing(self, param):
        print("This is aux_fun_with_params_failing, param={}".format(param))

    def test_aux_fun_with_param(self, param=None):
        if param is None:
            self.skipTest('Skipping as param is None')
        else:
            print("This is test_aux_fun_with_param, param={}".format(param))

    def aux_fun(self, param1, param2):
        with self.subTest(param1=param1):
            my_fun(param1=param1)
        with self.subTest(parm2=param2):
            my_fun(param2=param2)
        with self.subTest(param1=param1, param2=param2):
            my_fun(param1=param1, param2=param2)

    def test_something_relying_on_aux_fun_skipped(self):
        self.aux_fun_skipped()
        print("Call done.")

    def test_something_relying_on_aux_fun(self):
        self.test_aux_fun_with_param(4)
        for param1 in [5, 6]:
            for param2 in ([10, 11]):
                self.aux_fun(param1, param2)
        self.test_aux_fun_with_param_failing(3)
        print("Did all calls.")

if __name__ == '__main__':
    unittest.main()

By running $ python3 -m unittest -v I get the following output, which illustrates the cases described above:

test_aux_fun_with_param (test_ignore_test.MyFunTestCase) ... skipped 'Skipping as param is None'
test_aux_fun_with_param_failing (test_ignore_test.MyFunTestCase) ... ERROR
test_something_relying_on_aux_fun (test_ignore_test.MyFunTestCase) ... This is test_aux_fun_with_param, param=4
This is aux_fun_with_params_failing, param=3
Did all calls.
ok
test_something_relying_on_aux_fun_skipped (test_ignore_test.MyFunTestCase) ... skipped 'Skip aux_fun_skipped'

======================================================================
ERROR: test_aux_fun_with_param_failing (test_ignore_test.MyFunTestCase)
----------------------------------------------------------------------
TypeError: MyFunTestCase.test_aux_fun_with_param_failing() missing 1 required positional argument: 'param'

----------------------------------------------------------------------
Ran 4 tests in 0.002s

FAILED (errors=1, skipped=2)

In particular, we see that unittest did not try to run aux_fun on its own, but test_something_relying_on_aux_fun calls it successfully.

Finally, note that unittest's test discovery lets you choose the file naming convention with the --pattern parameter (https://docs.python.org/3/library/unittest.html#test-discovery), so you are not limited to files whose names start with test_; but as far as I know there is no equivalent setting for method names inside a unittest.TestCase subclass.
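As a quick illustration that the discovery pattern applies at the file level, here is a self-contained sketch using a temporary directory and the programmatic discover() API; the check_*.py naming is arbitrary, chosen so the default test*.py pattern would miss the file:

```python
# Sketch: unittest discovery filters *file* names, not method names.
# A temporary directory holds one test file matching check_*.py,
# which the default pattern test*.py would not pick up.
import os
import tempfile
import textwrap
import unittest

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "check_example.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import unittest

        class ExampleTest(unittest.TestCase):
            def test_ok(self):
                self.assertTrue(True)
    """))

# Equivalent to: python3 -m unittest discover -s <tmpdir> -p 'check_*.py'
suite = unittest.defaultTestLoader.discover(tmpdir, pattern="check_*.py")
print(suite.countTestCases())  # 1
```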

I hope this helps.

There is a way to do it in unittest:

@unittest.skip("skipping reason")

tf.test.TestCase also provides skipTest(reason); read more at https://www.tensorflow.org/api_docs/python/tf/test/TestCase#skipTest

1 Comment

The OP has already skipped the tests, but is not happy with the result.
