How do I create a single setup function for all my nose test cases that is called only once during initialization? I have a global configuration that only needs to be set once, and I feel that adding the following to each module (or even calling a setup function from each module) is a bit superfluous:
def setUp(self):
    Configuration.configure('some configuration settings')
I figured it out! Nose provides package-level setup and teardown, as documented here. All I have to do is define the setup function in the package's __init__.py file.
Here is a simple example of how to use the setup function:
lines = []

def setup():
    global lines
    lines.append('test')  # here, we can trigger a build
                          # and read in a file, for example

def test_this():
    assert lines[0] == 'test'
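For completeness, a minimal sketch of where this lives (the package name tests and the Configuration call are taken from the question; the layout itself is illustrative): the function goes into the package's __init__.py, and nose calls it once before any test module in the package runs.

# tests/__init__.py
def setup():
    # runs once, before any test in the package
    Configuration.configure('some configuration settings')

def teardown():
    # runs once, after all tests in the package have finished
    pass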
I really couldn't find a solution for:
PytestCollectionWarning: cannot collect test class 'TestBlaBla' because it has a __init__ constructor
Here is my testing module. It needs to take its arguments from outside this file because, in the end, I'm going to call all test modules from a single file and run them over many different names, and there are a lot of names. I have to initialize them, but when I run pytest it always ignores these classes. I don't know how to handle this without initializing them. Any suggestions would be appreciated.
tests/test_bla_bla.py
class TestBlaBla():
    def __init__(self, **kwargs):
        self.name1 = kwargs.get("name1")
        self.name2 = kwargs.get("name2")

    @pytest.fixture(scope='session')
    def load_data_here(self):
        return load_data(self.name1)  # a function from a utils file; it just loads data and needs a name for the path

    ..
    "continue with test_ functions that use output of load_data_here"
tests/main.py
class TestingAll:
    def __init__(self, *args, **kwargs):
        self.name1 = kwargs.get("name1")
        self.name2 = kwargs.get("name2")
        self._process()

    def _process(self):
        TestBlaBla(name1=self.name1, name2=self.name2)
        TestBlaBla2(name1=self.name1, name2=self.name2)

if __name__ == "__main__":
    Test = TestingAll(name1="name1", name2="name2")
Pytest test classes cannot have __init__ methods: the test framework instantiates the class itself, and there is no way (IMHO?) to extend that instantiation and add arguments.
Yes, it is a natural idea to want to make your tests flexible by passing in command-line arguments. But you can't :-P. So you need to find another way of doing this.
Note also that the if __name__ == '__main__': block can work if you call the test file with python and add some code that explicitly invokes a test runner. Note that you do not just call your test class: a test runner needs to be instantiated itself and run the tests in a particular way.
e.g. we can have the following, which will let Python instantiate and run the tests in a TestingAll class (as long as it doesn't have an __init__ method).
This uses unittest.TextTestRunner.
Note there are all sorts of Python test runners, including runners like nose2 or py.test, which use a different test library.
import unittest

if __name__ == '__main__':
    # unittest.main() would also work here; below, a suite is built and run explicitly
    suite = unittest.TestLoader().loadTestsFromTestCase(TestingAll)
    unittest.TextTestRunner(verbosity=3).run(suite)
You could maybe have an ArgsProcess class to process command line args.
Then iterate over the values, set a global variable to each one in turn, and call the test runner each time.
But it depends on how your tests will be used.
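As a rough illustration of that idea (the argument names and the module-level variable here are made up, and TestingAll is assumed to be a test class without an __init__ method):

import argparse
import unittest

NAME = None  # a module-level value the tests can read instead of __init__ arguments

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--names", nargs="+", default=["name1", "name2"])
    args = parser.parse_args()

    for value in args.names:
        NAME = value
        # run the whole suite once per command-line value
        suite = unittest.TestLoader().loadTestsFromTestCase(TestingAll)
        unittest.TextTestRunner(verbosity=3).run(suite)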
The answers on this question, already mentioned in the comments, explain this warning and link to its documentation:
py.test skips test class if constructor is defined
The answer on this question shows how an __init__ method can be replaced by using a fixture:
Pytest collection warning due to __init__ constructor
Maybe something like this would work for you.
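Sticking with the code from the question, a rough sketch of that fixture-based approach might look like this (the import of load_data and the hard-coded names are assumptions; the names could instead come from a command-line option registered in conftest.py via pytest_addoption):

import pytest
from utils import load_data  # hypothetical import; the question says this helper lives in a utils file

@pytest.fixture(scope="session")
def names():
    # hard-coded here for illustration
    return {"name1": "name1", "name2": "name2"}

@pytest.fixture(scope="session")
def data_here(names):
    return load_data(names["name1"])

class TestBlaBla:
    def test_something(self, data_here):
        assert data_here is not None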
I need to write a unittest setUp function to run before each test function. The problem is that the test functions are not in a class that inherits from TestCase, and I cannot put them inside a class because they use tons of fixtures that would need a lot of hand work to keep working.
Any suggestion about how to write a setUp function outside of a class?
If this doesn't answer the question, then I think the question needs a lot more detail and context:
def setUp():
    ...  # do setUp stuff

def test1():
    setUp()
    # ... do test 1 stuff

def test2():
    setUp()
    # ... do test 2 stuff
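If the tests are collected by pytest anyway (the question mentions they rely heavily on fixtures), an autouse fixture is another way to run setup before each test function without putting them in a class; a minimal sketch:

import pytest

@pytest.fixture(autouse=True)
def set_up():
    # ...do setUp stuff before each test in this module
    yield
    # ...optional tearDown stuff runs here after each test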
Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.
test_foo.py:
from foo import Bar as Implementation
from pytest import mark

@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>

@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>

<more such tests>
Now imagine that, in the future, I expect different implementations of the same interface to be written. I would like those implementations to be able to reuse the tests that were written for the above test suite: the only things that need to change are
The import of the Implementation
<test data set 1>, <test data set 2> etc.
So I am looking for a way to write the above tests in a reusable way, that would allow authors of new implementations of the interface to be able to use the tests by injecting the implementation and the test data into them, without having to modify the file containing the original specification of the tests.
What would be a good, idiomatic way of doing this in pytest?
Here is a unittest version that (isn't pretty but) works.
define_tests.py:
# Single, reusable definition of tests for the interface. Authors of
# new implementations of the interface merely have to provide the test
# data, as class attributes of a class which inherits
# unittest.TestCase AND this class.
class TheTests():
    def test_foo(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_foo_data:
            self.assertEqual(self.Implementation(*args).foo(in_), out)

    def test_bar(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_bar_data:
            self.assertEqual(self.Implementation(*args).bar(in_), out)
v1.py:
# One implementation of the interface
class Implementation:
    def __init__(self, a, b):
        self.n = a + b

    def foo(self, n):
        return self.n + n

    def bar(self, n):
        return self.n - n
v1_test.py:
# Test for one implementation of the interface
from v1 import Implementation
from define_tests import TheTests
from unittest import TestCase

# Hook into testing framework by inheriting unittest.TestCase and reuse
# the tests which *each and every* implementation of the interface must
# pass, by inheritance from define_tests.TheTests
class FooTests(TestCase, TheTests):
    Implementation = Implementation
    test_foo_data = (((1, 2), 3, 6),
                     ((4, 5), 6, 15))
    test_bar_data = (((1, 2), 3, 0),
                     ((4, 5), 6, 3))
Anybody (even a client of the library) writing another implementation of this interface can reuse the set of tests defined in define_tests.py and inject their own test data into them, without modifying any of the original files.
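For example, a test module for a hypothetical second implementation (v2 is made up here; its test data would follow the same (constructor args, input, expected output) shape) could look like:

# v2_test.py
from v2 import Implementation
from define_tests import TheTests
from unittest import TestCase

class V2Tests(TestCase, TheTests):
    Implementation = Implementation
    test_foo_data = <test data for v2's foo>
    test_bar_data = <test data for v2's bar>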
This is a great use case for parametrized test fixtures.
Your code could look something like this:
import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    return request.param

def test_one(Implementation):
    assert Implementation().frobnicate()
This would have test_one run twice: once where Implementation=Bar and once where Implementation=Baz.
Note that since Implementation is just a fixture, you can change its scope, or do more setup (maybe instantiate the class, maybe configure it somehow).
If used with the pytest.mark.parametrize decorator, pytest will generate all the permutations. For example, assuming the code above, and this code here:
@pytest.mark.parametrize('thing', [1, 2])
def test_two(Implementation, thing):
    assert Implementation(thing).foo == thing
test_two will run four times, with the following configurations:
Implementation=Bar, thing=1
Implementation=Bar, thing=2
Implementation=Baz, thing=1
Implementation=Baz, thing=2
You can't do it without class inheritance, but you don't have to use unittest.TestCase. To make it more pytest-like, you can use fixtures.
That allows you, for example, to parametrize the fixture or to use other fixtures.
Here is a simple example.
import pytest

class SomeTest:
    @pytest.fixture
    def implementation(self):
        return "A"

    def test_a(self, implementation):
        assert "A" == implementation


class OtherTest(SomeTest):
    @pytest.fixture(params=["B", "C"])
    def implementation(self, request):
        return request.param

    def test_a(self, implementation):
        """ the "implementation" fixture is not accessible out of class """
        assert "A" == implementation
and the second test fails:
    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'B'
E         - A
E         + B

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'C'
E         - A
E         + C

    def test_a(implementation):
E       fixture 'implementation' not found
Don't forget that you have to define python_classes = *Test in pytest.ini, since class names ending in Test are not collected by default.
I did something similar to what @Daniel Barto was saying, adding additional fixtures.
Let's say you have 1 interface and 2 implementations:
class Imp1(InterfaceA):
    pass  # Some implementation.

class Imp2(InterfaceA):
    pass  # Some implementation.
You can indeed encapsulate testing in subclasses:
import pytest

@pytest.fixture
def imp_1():
    yield Imp1()

@pytest.fixture
def imp_2():
    yield Imp2()


class InterfaceToBeTested:
    @pytest.fixture
    def imp(self):
        pass

    def test_x(self, imp):
        assert imp.test_x()

    def test_y(self, imp):
        assert imp.test_y()


class TestImp1(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_1):
        yield imp_1

    def test_1(self, imp):
        assert imp.test_1()


class TestImp2(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_2):
        yield imp_2
Note: by adding another derived class and overriding the fixture that returns the implementation, you can run all the tests against it; and if there are implementation-specific tests, they can be written in that class as well.
Conditional Plugin Based Solution
There is in fact a technique that leans on the pytest_plugins list, where you can condition its value on something outside pytest, namely environment variables and command-line arguments. Consider the following:
if os.environ["pytest_env"] == "env_a":
pytest_plugins = [
"projX.plugins.env_a",
]
elif os.environ["pytest_env"] == "env_b":
pytest_plugins = [
"projX.plugins.env_b",
]
I authored a GitHub repository to share some pytest experiments demonstrating the above techniques with commentary along the way and test run results. The relevant section to this particular question is the conditional_plugins experiment.
https://github.com/jxramos/pytest_behavior
This would position you to use the same test module with two different implementations of an identically named fixture. However you'd need to invoke the test once per each implementation with the selection mechanism singling out the fixture implementation of interest. Therefore you'd need two pytest sessions to accomplish testing the two fixture variations.
In order to reuse the tests you have in place, you'd need to establish a root directory higher than the project you're trying to reuse and define a conftest.py file there that does the plugin selection. That still may not be enough, because of the overriding behavior of the test module and any intermediate conftest.py files if you leave the directory structure as is. But if you're free to reshuffle files while leaving their contents unchanged, you just need to get the existing conftest.py file out of the path between the test module and the root directory and rename it so that it is picked up as a plugin instead.
Configuration / Command line Selection of Plugins
Pytest actually has a -p command-line option, which can be repeated to list multiple plugins back to back and so specify the plugin files explicitly. You can learn more about that control by looking at the ini_plugin_selection experiment in the pytest_behavior repo.
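For example (using the plugin module paths from the snippet above), each variation would get its own session, along the lines of:

pytest -p projX.plugins.env_a
pytest -p projX.plugins.env_b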
Parametrization over Fixture Values
As of this writing this is a work in progress for core pytest functionality, but there is a third-party plugin, pytest-cases, which supports the notion of a fixture itself being used as a parameter to a test case. With that capability you can parametrize over multiple fixtures for the same test case, with each fixture backed by a different implementation of the API. This sounds like the ideal solution to your use case; however, you would still need to decorate the existing test module with new source to permit this parametrization over fixtures, which may not be acceptable to you.
Take a look at the rich discussion in the open pytest issue #349, Using fixtures in pytest.mark.parametrize, specifically this comment. The commenter links to a concrete example they wrote up that demonstrates the fixture-parametrization syntax.
Commentary
I get the sense that the test fixture hierarchy one can build above a test module, all the way up to the execution's root directory, is oriented more towards fixture reuse than test module reuse. If you think about it, you can write several fixtures high up in a common subfolder from which a bunch of test modules branch out, potentially landing deep in a number of child subdirectories. Each of those test modules has access to the fixtures defined in that parent conftest.py, but without extra work they get only one definition per fixture name across all the intermediate conftest.py files, even if the same name is reused across that hierarchy. The fixture chosen is the one closest to the test module, through pytest's fixture-overriding mechanism, but the resolution stops at the test module and does not continue past it into folders beneath the test module where variation might be found. Essentially there is only one path from the test module to the root dir, which limits the fixture definitions to one. This gives us a one-fixture-to-many-test-modules relationship.
I have encountered something mysterious when using the patch decorator from the mock package together with a pytest fixture.
I have two modules:
-----test folder
-------func.py
-------test_test.py
in func.py:
def a():
    return 1

def b():
    return a()
in test_test.py:
import pytest
from func import a, b
from mock import patch, Mock

@pytest.fixture(scope="module")
def brands():
    return 1

mock_b = Mock()

@patch('test_test.b', mock_b)
def test_compute_scores(brands):
    a()
It seems that the patch decorator is not compatible with pytest fixtures. Does anyone have an insight on that? Thanks
When using pytest fixture with mock.patch, test parameter order is crucial.
If you place a fixture parameter before a mocked one:
from unittest import mock

@mock.patch('my.module.my.class')
def test_my_code(my_fixture, mocked_class):
then the mock object will end up in my_fixture, and mocked_class will be looked up as a fixture:
fixture 'mocked_class' not found
But, if you reverse the order, placing the fixture parameter at the end:
from unittest import mock

@mock.patch('my.module.my.class')
def test_my_code(mocked_class, my_fixture):
then all will be fine.
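A complete, minimal sketch of the working ordering (the fixture and the patched target os.getcwd are chosen purely for illustration):

from unittest import mock
import pytest

@pytest.fixture
def my_fixture():
    return 1

@mock.patch('os.getcwd')
def test_my_code(mocked_getcwd, my_fixture):
    # the mock comes first, the fixture last
    mocked_getcwd.return_value = '/tmp'
    assert my_fixture == 1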
As of Python 3.3, the mock module has been pulled into the standard library as unittest.mock. There is also a backport (for earlier versions of Python), available as the standalone library mock.
Combining these two libraries within the same test suite yields the above-mentioned error:
E fixture 'fixture_name' not found
Within your test-suite's virtual environment, run pip uninstall mock, and make sure you aren't using the backported library alongside the core unittest library. When you re-run your tests after uninstalling, you would see ImportErrors if this were the case.
Then replace all imports from the backport with from unittest.mock import <stuff>.
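That is, assuming imports like the one in the question:

# before (standalone backport)
from mock import patch, Mock

# after (standard library)
from unittest.mock import patch, Mock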
Hopefully this answer on an old question will help someone.
First off, the question doesn't include the error, so we don't really know what's up. But I'll try to provide something that helped me.
If you want a test decorated with a patched object, then in order for it to work with pytest you could just do this:
@mock.patch('mocked.module')
def test_me(*args):
    mocked_module = args[0]
Or for multiple patches:
@mock.patch('mocked.module1')
@mock.patch('mocked.module')
def test_me(*args):
    mocked_module1, mocked_module2 = args
pytest looks at the parameter names of the test function/method to decide which fixtures to look up. Providing the *args argument gives us a good workaround for that lookup phase. So, to include a fixture alongside patches, you could do this:
# from question
@pytest.fixture(scope="module")
def brands():
    return 1

@mock.patch('mocked.module1')
def test_me(brands, *args):
    mocked_module1 = args[0]
This worked for me running python 3.6 and pytest 3.0.6.
If you have multiple patches to apply, the order in which they are injected is important:
# from question
@pytest.fixture(scope="module")
def brands():
    return 1

# notice the order
@patch('my.module.my.class1')
@patch('my.module.my.class2')
def test_list_instance_elb_tg(mocked_class2, mocked_class1, brands):
    pass
This doesn't address your question directly, but there is the pytest-mock plugin which allows you to write this instead:
def test_compute_scores(brands, mock):
    mock_b = mock.patch('test_test.b')
    a()
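Note that in current versions of pytest-mock the fixture is named mocker, so the equivalent would look something like:

def test_compute_scores(brands, mocker):
    mock_b = mocker.patch('test_test.b')
    a()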
a) For me the solution was to use a with block inside the test function instead of a @patch decorator above the test function:
class TestFoo:
    def test_baa(self, my_fixture):
        with patch(
            'module.Class.function_to_patch',
            MagicMock(return_value='mocked_result')
        ) as mocked_function_to_patch:
            result = my_fixture.baa('mocked_input')
            assert result == 'mocked_result'
            mocked_function_to_patch.assert_has_calls([
                call('mocked_input')
            ])
This solution does work inside classes (which I use to structure/group my test methods). With the with block you don't need to worry about the order of the arguments. I find it more explicit than the injection mechanism, but the code becomes ugly if you patch more than one variable. If you need to patch many dependencies, that might be a signal that your tested function does too many things and that you should refactor it, e.g. by extracting some of the functionality into extra functions.
b) If you are outside classes and do want a patched object to be injected as an extra argument into a test function, note that @patch does not support defining the mock as the second argument of the decorator:
@patch('path.to.foo', MagicMock(return_value='foo_value'))
def test_baa(self, my_fixture, mocked_foo):
does not work.
=> Make sure to pass the path as the only argument to the decorator, then define the return value inside the test function:
@patch('path.to.foo')
def test_baa(self, my_fixture, mocked_foo):
    mocked_foo.return_value = 'foo_value'
(Unfortunately, this does not seem to work inside classes.)
First list the injected fixture(s), then the variables injected by the @patch decorators (e.g. 'mocked_foo').
The name of the injected fixture 'my_fixture' needs to be correct: it must match the name of the decorated fixture function (or the explicit name given in the fixture decorator).
The name of the injected patch variable 'mocked_foo' does not follow any particular naming pattern. You can choose it as you like, independently of the path given to the corresponding @patch decorator.
If you inject several patched variables, note that the order is reversed: the mock belonging to the last @patch decorator is injected first:
@patch('path.to.foo')
@patch('path.to.qux')
def test_baa(self, my_fixture, mocked_qux, mocked_foo):
    mocked_foo.return_value = 'foo_value'
I had the same problem, and the solution for me was to use the mock library at version 1.0.1 (before that I was using unittest.mock at version 2.6.0). Now it works like a charm :)
I started using python's Nose to execute my functional tests.
I use it with SauceLab's service. I execute the tests from the command line and see the reports on Sauce dashboard.
Now, every test is a class containing setUp(), the_test(), and tearDown() methods.
Inside the setUp() method are the capabilities passed to Sauce, configuring the browser/version/OS the test will run on.
def setUp(self):
    # REMOTE
    desired_capabilities = webdriver.DesiredCapabilities.FIREFOX
    desired_capabilities['version'] = '21'
    desired_capabilities['platform'] = 'Windows XP'
    desired_capabilities['name'] = className.getName(self)
    desired_capabilities['record-video'] = False
    self.wd = webdriver.Remote(
        desired_capabilities=desired_capabilities,
        command_executor="http://the_username:the_API_key@ondemand.saucelabs.com:80/wd/hub")
    self.wd.implicitly_wait(10)
I would like to do the following...:
Create a separate file containing the setUp and tearDown functions and just call them by name exactly where I need them (before and after the test/tests).
Right now they exist inside each and every Python file I have, and they are the same piece of code.
Additionally, I think nose provides a way to automatically detect the two functions and call them when needed. Is that feasible?
Thank you in advance
Put them in a super class.
class MyTestCase(TestCase):
    def setUp(self):
        ...  # do setup stuff
Then each of your tests can inherit from MyTestCase. You can further override setUp or tearDown in each test class, but do remember to call the superclass's setUp (e.g. super().setUp()) as well.
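A concrete sketch of that layout, based on the setUp from the question (the file and class names here are only illustrative):

# base_test.py
import unittest
from selenium import webdriver

class MyTestCase(unittest.TestCase):
    def setUp(self):
        desired_capabilities = webdriver.DesiredCapabilities.FIREFOX.copy()
        desired_capabilities['version'] = '21'
        desired_capabilities['platform'] = 'Windows XP'
        desired_capabilities['record-video'] = False
        self.wd = webdriver.Remote(
            desired_capabilities=desired_capabilities,
            command_executor="http://the_username:the_API_key@ondemand.saucelabs.com:80/wd/hub")
        self.wd.implicitly_wait(10)

    def tearDown(self):
        self.wd.quit()


# test_login.py -- an individual test module just inherits the base class
from base_test import MyTestCase

class TestLogin(MyTestCase):
    def test_login_page_loads(self):
        self.wd.get("http://example.com/login")
        self.assertIn("Example", self.wd.title)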