Problem
Yes, I know you shouldn't pass arguments to a unit test, but in this case I have to so that the tests can access a piece of hardware within the current framework. For reasons whose details are irrelevant, I can't use a simple board = myBoard.getBoard() call within the tests, otherwise I wouldn't be here asking you all.
Horrible Attempt 1
class Test(object):
    @staticmethod
    def __call__(board):
        global _board
        _board = board
        logging.info('Entering a Unit Test')
        suite = unittest.TestLoader()
        suite = suite.loadTestsFromTestCase(TestCase)
        unittest.TextTestRunner().run(suite)
This method will be called upon to perform a unit test at some point. That's the way things are set out in the current framework; I'm just following the rules.
You can see here I assign a global variable (horrible). This can then be accessed by the test methods easily:
class TestCase(unittest.TestCase):
    def runTest(self):
        logging.critical('running')
    def setUp(self):
        logging.critical('setup')
    def test_Me(self):
        logging.critical('me')
The result is that it never enters runTest, but it does run setUp before each test_ method. That's pretty much what I want; I can live with it not entering runTest.
Horrible Attempt 2
I'd like to do the above but without global variables so I tried this method:
class Test(object):
    @staticmethod
    def __call__(board):
        logging.info('Entering a basic Unit Test')
        suite = unittest.TestSuite()  # creates a suite for testing
        suite.addTest(BasicCase(board))
        unittest.TextTestRunner().run(suite)
class BasicCase(unittest.TestCase):
    def __init__(self, board):
        self.board = board
        logging.critical('init')
        super(BasicCase, self).__init__()
    def runTest(self):
        logging.critical('runtest')
    def setUp(self):
        logging.critical('reset')
        infra.Reset.__call__(self.board)  # allows me to pass the board in a much nicer way
    def test_Connect(self):
        logging.critical('connection')
The problem with the above is that it will obviously only run init, setUp and runTest; it misses any test_ methods.
If I add:
suite.addTest(BasicCase(board).test_Connect)
Then it unnecessarily calls init twice, and I have to add an extra parameter to test_Connect to accept the test object. I could deal with this too, since I can return a list of the test_ methods, but it just seems like a pain. I also don't think it runs setUp every time it enters a new test_ method. Not ideal.
So ...
So is there some other method I can use that lets me quickly run everything like a unittest but still pass in the board object?
If I should be using a different Python module, please say so, but I'm using unittest for its lovely outputs and assert methods.
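Not part of the question, but one commonly used workaround is worth sketching: inject the object as a class attribute before the loader builds the suite, so every test_ method reaches it through self. The names run_with_board, BoardTestCase and the fake board string are illustrative, not from the original framework:

```python
import unittest

class BoardTestCase(unittest.TestCase):
    board = None  # injected before the suite is loaded

    def test_board_present(self):
        # each test_ method sees the injected object through self
        self.assertEqual(self.board, 'fake-board')

def run_with_board(board):
    BoardTestCase.board = board  # attach the hardware handle to the class
    suite = unittest.TestLoader().loadTestsFromTestCase(BoardTestCase)
    return unittest.TextTestRunner().run(suite)

result = run_with_board('fake-board')  # stand-in for the real board object
```

This keeps the setUp and test_ discovery behaviour of attempt 1 without a module-level global, at the cost of storing state on the class itself.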
Related
I'm looking to write unit tests for some code that uses an object's __subclasses__() method. Ultimately, I'm trying to use __subclasses__() to keep track of classes dynamically imported into my application through a namespace package.
In some cases, my code base has test classes created within a pytest file. My problem is that when I run pytest in bulk on my source directory, I see failures or errors caused by import pollution from these one-off test classes, because the pytest run keeps all imports alive as it works through the source tree. On their own the tests pass fine, but in sequence they fail, sometimes depending on the order in which they run.
In my current code branch these __subclasses__() invocations live in the application code, but here I have moved them into tests to demonstrate with an MVE:
In my/example.py
class MyClass(object):
    def __init__(self):
        pass

class MySubClass(MyClass):
    def __init__(self):
        super().__init__()
In my/test_subclass.py
from my.example import MyClass

class TestSubClass(MyClass):
    def __init__(self):
        super().__init__()

def test_TestSubClass():
    assert issubclass(TestSubClass, MyClass)
In my/test_subclasses.py
from my.example import MySubClass, MyClass

def test_find_subclasses():
    assert all([cls == MySubClass for cls in MyClass.__subclasses__()])
The result, when run on my machine, is that the test_find_subclasses() test fails due to the discovery of the TestSubClass when running after test_subclass.py:
    def test_find_subclasses():
>       assert all([cls == MySubClass for cls in MyClass.__subclasses__()])
E       assert False
E        +  where False = all([True, False])
What is the best way to maintain a "clean" state during sequenced pytest runs so that I can avoid mangling imports?
As discussed in the comments, you probably don't want to hard-code the types that may extend MyClass, since you really can't predict what your application will need in the future. If you want to check subclassing, just check that it works at all:
def test_find_subclasses():
    assert MySubClass in MyClass.__subclasses__()
Even more concisely, you could simply do
def test_find_subclasses():
    assert issubclass(MySubClass, MyClass)
That being said, you can technically filter the classes you are looking through. In your particular case, you have a distinctive naming convention, so you can do something like
def only_production_classes(iterable):
    return [cls for cls in iterable if not cls.__name__.lower().startswith('test')]

def test_find_subclasses():
    assert all([cls == MySubClass for cls in only_production_classes(MyClass.__subclasses__())])
You can define only_production_classes using other criteria, like the module that the class appears in:
def only_production_classes(iterable):
    return [cls for cls in iterable if not cls.__module__.lower().startswith('test_')]
You can't easily unlink class objects that have been loaded, so your idea of a "clean" test environment is not quite feasible. But you do have the option of filtering the data that you work with based on where it was imported from.
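To illustrate why the pollution persists (my addition, not from the answer above): __subclasses__() holds only weak references, so a subclass drops out of the list once nothing else references it, but during a pytest run the test module stays in sys.modules and keeps TestSubClass alive. A CPython-specific sketch:

```python
import gc

class Base:
    pass

class Child(Base):
    pass

# the registry lists Child while a strong reference (the name Child) exists
assert Child in Base.__subclasses__()

del Child     # drop the only strong reference
gc.collect()  # let the weak reference in the registry be cleared

# Child no longer appears; in a pytest run, the test module's reference
# to the class prevents exactly this from happening
assert all(cls.__name__ != 'Child' for cls in Base.__subclasses__())
```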
I have a class called Person(). It has a CURRENT_YEAR class variable, intended to be shared among all instances of the class.
I was hoping that each of my tests in a single module would get a fresh (new) object, since I scoped the fixture as 'function'. However, when one test function changes CURRENT_YEAR (via a class method that updates Person.CURRENT_YEAR), the change persists into the next test function. So clearly the object isn't being wiped out and recreated for each test.
The fixture is created in the conftest.py, accessible by all tests.
In the end, I broke it all down, and moved things around, but keep seeing the same thing. The Person() class is not getting instantiated more than once, as I would have expected. How should a fixture be created, so that each test_ function gets its own scope for the Class?
I've tried moving tests to separate modules, but it didn't help.
I tried making a second fixture, that returns a Person() object. No difference.
I've really stripped it down in the code below, so it's hopefully clear what I'm trying and why I'm confused.
project_root/tests/test_temp.py
import os, sys
tests = os.path.dirname(__file__)
project = os.path.dirname(tests)
sys.path.insert(0, project)

import pytest
from app.person import *

def test_current_year_changes(person_fixture):
    p = person_fixture
    print(f"CY is {p.CURRENT_YEAR} and age is {p.get_age()}")
    p.add_years_to_age(20)
    print(f"CY is {p.CURRENT_YEAR} and age is {p.get_age()}")
    assert p.CURRENT_YEAR == 20

def test_current_year_changes2(person_fixture2):
    p = person_fixture2
    print(f"CY is {p.CURRENT_YEAR} and age is {p.get_age()}")
    p.add_years_to_age(20)
    print(f"CY is {p.CURRENT_YEAR} and age is {p.get_age()}")
    assert p.CURRENT_YEAR == 20

@pytest.fixture(scope='function')
def person_fixture():
    p = Person()
    return p

@pytest.fixture(scope='function')
def person_fixture2():
    p = Person()
    return p
project_root/app/person.py
class Person(object):
    CURRENT_YEAR = 0
    def __init__(self, name=""):
        self.name = name
        self.birth_year = Person.CURRENT_YEAR
    def add_years_to_age(self, years=1):
        Person.CURRENT_YEAR += years
    def get_age(self):
        return Person.CURRENT_YEAR - self.birth_year
The code looks like both tests should be independent, but the second test function shows that CURRENT_YEAR does not start fresh.
The assert fails, showing that Person.CURRENT_YEAR is 40 instead of 20.
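The behaviour can be reproduced without pytest at all. This sketch (a trimmed copy of the Person class from the question) shows that the class attribute, not the instance, carries the state:

```python
class Person:
    CURRENT_YEAR = 0  # class attribute, shared by every instance

    def __init__(self, name=""):
        self.name = name
        self.birth_year = Person.CURRENT_YEAR

    def add_years_to_age(self, years=1):
        Person.CURRENT_YEAR += years

    def get_age(self):
        return Person.CURRENT_YEAR - self.birth_year

p1 = Person()
p1.add_years_to_age(20)   # first "test" bumps the class attribute to 20

p2 = Person()             # a brand-new instance...
p2.add_years_to_age(20)   # ...still bumps the same class attribute

assert Person.CURRENT_YEAR == 40  # which is why the second test sees 40
```

Creating a fresh instance per test (as the function-scoped fixture does) never touches Person.CURRENT_YEAR, so the value accumulates across tests.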
The fixture scope just defines when the function decorated with @pytest.fixture is run; it's a way to factor common test code into a separate function.
In your case the scope is "function", so the fixture runs for each test function (that uses the fixture) and creates a Person instance each time. Similarly, it would run once per test module if the scope were "module".
And that is working exactly as intended. It's not just working as intended by pytest but also as intended by yourself: remember that you actually wanted to share CURRENT_YEAR between different instances!
How should a fixture be created, so that each test_ function gets its own scope for the Class?
You really shouldn't use global or static variables (and class variables are just global variables hidden behind a class), precisely because they make testing really hard (and make the program non-thread-safe). Also remember that pytest cannot reset your program if you don't provide the infrastructure for it! Think about it: what exactly should happen? Should it create a new interpreter session for each test function? Reload modules? Reload the class definition? Just set Person.CURRENT_YEAR to zero?
One way to solve this is to abstract the class variables for example with an environment class (the current year also doesn't seem a good fit for a Person class anyway):
class Environment(object):
    def __init__(self):
        self.CURRENT_YEAR = 0

class Person(object):
    def __init__(self, environment, name=""):
        self.environment = environment
        self.name = name
        self.birth_year = self.environment.CURRENT_YEAR
    def add_years_to_age(self, years=1):
        self.environment.CURRENT_YEAR += years
    def get_age(self):
        return self.environment.CURRENT_YEAR - self.birth_year
And then let the fixture create a new environment and person instance:
@pytest.fixture(scope='function')
def person_fixture():
    e = Environment()
    p = Person(e)
    return p
At that point you'll probably need a global Environment instance in your application code so that different Person instances can share it.
Note that this doesn't make much sense for just one variable, and you'd probably end up with different classes for different environmental variables. If your app gets more complicated, you'll probably need to think about dependency injection to manage that complexity.
However if you just want the CURRENT_YEAR to reset for each function that uses your person_fixture you could also just set it to 0 in the fixture:
@pytest.fixture(scope='function')
def person_fixture_with_current_year_reset():
    Person.CURRENT_YEAR = 0
    p = Person()
    return p
That will work for now, but once you run the tests in parallel you may see random failures, because global variables (and class variables) are inherently non-thread-safe.
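A related middle ground (my addition): pytest's built-in monkeypatch fixture undoes attribute changes at teardown, so a reset done through it does not leak into later tests. The same restore-on-exit effect can be sketched with the standard library's mock.patch.object; the trimmed Person class below is illustrative:

```python
from unittest import mock

class Person:
    CURRENT_YEAR = 0

    def add_years_to_age(self, years=1):
        Person.CURRENT_YEAR += years

Person.CURRENT_YEAR = 99  # pretend an earlier test polluted the class

with mock.patch.object(Person, 'CURRENT_YEAR', 0):
    Person().add_years_to_age(20)
    assert Person.CURRENT_YEAR == 20  # clean value inside the patch

assert Person.CURRENT_YEAR == 99  # the pre-patch value is restored on exit
```

In a fixture you would write monkeypatch.setattr(Person, 'CURRENT_YEAR', 0) and let pytest perform the restore after each test.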
My automation framework uses pytest setup/teardown-style testing instead of fixtures. I also have several levels of classes:
BaseClass - highest, all tests inherit from it
FeatureClass - medium, all tests related to the program's feature inherit from it
TestClass - holds the actual tests
Edit: for example's sake, I changed the DB calls to simple prints.
I want to add DB reporting to all setup/teardowns, i.e. I want the general BaseClass setup_method to create a DB entry for the test, and teardown_method to update the entry with the results. I have tried, but I can't seem to get at the values of the currently running test from inside setup_method at run time. Is that even possible? And if not, how could I do it otherwise?
samples:
(in base.py)
class Base(object):
    test_number = 0

    def setup_method(self, method):
        Base.test_number += 1
        self.logger.info(color.Blue("STARTING TEST"))
        self.logger.info(color.Blue("Current Test: {}".format(method.__name__)))
        self.logger.info(color.Blue("Test Number: {}".format(self.test_number)))
        # --->here i'd like to do something with the actual test parameters<---
        self.logger.info("print parameters here")

    def teardown_method(self, method):
        self.logger.info(color.Blue("Current Test: {}".format(method.__name__)))
        self.logger.info(color.Blue("Test Number: {}".format(self.test_number)))
        self.logger.info(color.Blue("END OF TEST"))
(in my_feature.py)
class MyFeature(base.Base):
    def setup_method(self, method):
        # enable this feature in program
        return True
(in test_my_feature.py)
class TestClass(my_feature.MyFeature):
    @pytest.mark.parametrize("fragment_length", [1, 5, 10])
    def test_my_first_test(self):
        # do stuff that is changed based on fragment_length
        assert verify_stuff(fragment_length)
So how can I get the parameters in setup_method of the base parent class of the testing framework?
The brief answer: no, you cannot do this. And yes, you can work around it.
A bit longer: these unittest-style setups and teardowns exist only for compatibility with unittest-style tests. They do not support pytest's fixtures, which are what make pytest nice.
Because of this, neither pytest nor pytest's unittest plugin provides the context for these setup/teardown methods. If you had a request, function or some other contextual object, you could get fixture values dynamically via request.getfuncargvalue('my_fixture_name').
However, all you have is self/cls, and method as the test method object itself (i.e. not pytest's node).
If you look inside of the _pytest/unittest.py plugin, you will find this code:
class TestCaseFunction(Function):
    _excinfo = None

    def setup(self):
        self._testcase = self.parent.obj(self.name)
        self._fix_unittest_skip_decorator()
        self._obj = getattr(self._testcase, self.name)
        if hasattr(self._testcase, 'setup_method'):
            self._testcase.setup_method(self._obj)
        if hasattr(self, "_request"):
            self._request._fillfixtures()
First, note that setup_method() is called fully isolated from pytest's objects (e.g. the test node).
Second, note that the fixtures are prepared after setup_method() is called, so even if you could access them, they would not be ready.
So, generally, you cannot do this without some trickery.
For the trickery, you have to define a pytest hook/hookwrapper once, and remember the pytest node being executed:
conftest.py or any other plugin:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    item.cls._item = item
    yield
test_me.py:
import pytest

class Base(object):
    def setup_method(self, method):
        length = self._item.callspec.getparam('fragment_length')
        print(length)

class MyFeature(Base):
    def setup_method(self, method):
        super().setup_method(method)

class TestClass(MyFeature):
    @pytest.mark.parametrize("fragment_length", [1, 5, 10])
    def test_my_first_test(self, fragment_length):
        # do stuff that is changed based on fragment_length
        assert True  # verify_stuff(fragment_length)
Also note that MyFeature.setup_method() must call the parent's super(...).setup_method() for obvious reasons.
The cls._item will be set for each callspec (i.e. each function call with each parameter). You can also put the item or the specific parameters into some other global state, if you wish.
Also, be careful not to save the field on item.instance. The instance of the class is created later, and you would have to use the setup_instance/teardown_instance methods for that; otherwise, the saved instance field is not preserved and is not available as self._item in setup_method().
Here is the execution:
============ test session starts ============
......
collected 3 items
test_me.py::TestClass::test_my_first_test[1] 1
PASSED
test_me.py::TestClass::test_my_first_test[5] 5
PASSED
test_me.py::TestClass::test_my_first_test[10] 10
PASSED
============ 3 passed in 0.04 seconds ============
I am unhappy with TestCase.setUp(): If a test has a decorator, setUp() gets called outside the decorator.
I am not new to python, I can help myself, but I search a best practice solution.
TestCase.setUp() feels like a way to handle set-up from before decorators were introduced in Python.
What is a clean solution to setup a test, if the setup should happen inside the decorator of the test method?
This would be a solution, but setUp() gets called twice:
class Test(unittest.TestCase):
    def setUp(self):
        ...

    @somedecorator
    def test_foo(self):
        self.setUp()
Example use case: somedecorator opens a database connection and works like a with statement: it can't be broken into two methods (setUp(), tearDown()).
Update
somedecorator is from a different application. I don't want to modify it. I could do some copy+paste, but that's not a good solution.
Ugly? Yes:
class Test(unittest.TestCase):
    def actualSetUp(self):
        ...

    def setUp(self):
        if not any('test_foo' in i for i in inspect.stack()):
            self.actualSetUp()

    @somedecorator
    def test_foo(self):
        self.actualSetUp()
This works by inspecting the current stack and determining whether we are inside any function called 'test_foo', so that name should obviously be unique.
Probably cleaner to just create another TestCase class specifically for this test, where you can have no setUp method.
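A somewhat cleaner variant of the same idea (my suggestion, not from the answer above): unittest stores the current test's name on the instance as _testMethodName, a private but long-stable attribute, so no stack inspection is needed. The somedecorator below is a stand-in for the real decorator from the other application:

```python
import unittest

def somedecorator(func):
    # stand-in for the real decorator from the other application
    def wrapper(self):
        return func(self)
    return wrapper

class Test(unittest.TestCase):
    def actualSetUp(self):
        self.prepared = True

    def setUp(self):
        # _testMethodName is private unittest API, but stable in practice
        if self._testMethodName != 'test_foo':
            self.actualSetUp()

    @somedecorator
    def test_foo(self):
        self.actualSetUp()  # run the setup inside the decorator
        self.assertTrue(self.prepared)

    def test_bar(self):
        self.assertTrue(self.prepared)  # prepared by the normal setUp

suite = unittest.TestLoader().loadTestsFromTestCase(Test)
result = unittest.TextTestRunner().run(suite)
```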
I'm using unittest to test my interactive terminal utility. I have two test cases with very similar contexts: one tests for correct output, and the other for correct handling of user commands in interactive mode. Both cases mock sys.stdout to suppress actual output (output is produced during interactive work as well).
Consider the following:
class StdoutOutputTestCase(unittest.TestCase):
    """Tests whether the stuff is printed correctly."""

    def setUp(self):
        self.patcher_stdout = mock.patch('sys.stdout', StringIO())
        self.patcher_stdout.start()

    # Do testing

    def tearDown(self):
        self.patcher_stdout.stop()

class UserInteractionTestCase(unittest.TestCase):
    """Tests whether user input is handled correctly."""

    def setUp(self):
        self.patcher_stdout = mock.patch('sys.stdout', StringIO())
        self.patcher_stdout.start()

    # Do testing

    def tearDown(self):
        self.patcher_stdout.stop()
What I don't like is that the context setup is repeated here twice (for now; it may be repeated even more with time).
Is there a good way to set up common context for both cases? Can unittest.TestSuite help me? If yes, how? I couldn't find any example of common context setup.
I've also thought about defining a function setup_common_context, which would be called from the setUp of both cases, but that's still repetition.
I've solved this problem in my projects just by putting common setup code in a base class, and then putting test cases in derived classes. My derived class setUp and tearDown methods just call the super class implementations, and only do (de)initialization specific to those test cases. Also, keep in mind that you can put multiple tests in each test case, which might make sense if all the setup is the same.
class MyBaseTestCase(unittest.TestCase):
    def setUp(self):
        self.patcher_stdout = mock.patch('sys.stdout', StringIO())
        self.patcher_stdout.start()

    # Do **nothing**

    def tearDown(self):
        self.patcher_stdout.stop()

class StdoutOutputTestCase(MyBaseTestCase):
    """Tests whether the stuff is printed correctly."""

    def setUp(self):
        super(StdoutOutputTestCase, self).setUp()
        # StdoutOutputTestCase specific set up code

    # Do testing

    def tearDown(self):
        super(StdoutOutputTestCase, self).tearDown()
        # StdoutOutputTestCase specific tear down code

class UserInteractionTestCase(MyBaseTestCase):
    # Same pattern as StdoutOutputTestCase
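A runnable sketch of the base-class pattern above; the captured-output test is illustrative, and the runner is pointed at sys.stderr so the results still appear while stdout is patched:

```python
import sys
import unittest
from io import StringIO
from unittest import mock

class MyBaseTestCase(unittest.TestCase):
    def setUp(self):
        # common context: replace sys.stdout for every derived test case
        self.patcher_stdout = mock.patch('sys.stdout', StringIO())
        self.fake_stdout = self.patcher_stdout.start()

    def tearDown(self):
        self.patcher_stdout.stop()

class StdoutOutputTestCase(MyBaseTestCase):
    def test_print_is_captured(self):
        print('hello')  # goes to the patched stdout, not the terminal
        self.assertEqual(self.fake_stdout.getvalue(), 'hello\n')

suite = unittest.TestLoader().loadTestsFromTestCase(StdoutOutputTestCase)
result = unittest.TextTestRunner(stream=sys.stderr).run(suite)
```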