How to run a method before all tests in all classes? - python

I'm writing Selenium tests, with a set of classes, each class containing several tests. Each class currently opens and then closes Firefox, which has two consequences:
- super slow: opening Firefox takes longer than running the tests in a class...
- crashes: after Firefox has been closed, trying to reopen it quickly from Selenium results in an 'Error 54'
I could probably solve the Error 54 by adding a sleep, but it would still be super slow.
So, what I'd like to do is reuse the same Firefox instance across all test classes. Which means I need to run a method before all test classes, and another method after all test classes. So 'setup_class' and 'teardown_class' are not sufficient.

Using a session fixture as suggested by hpk42 is a great solution for many cases, but the fixture will only run after all tests are collected.
Here are two more solutions:
conftest hooks
Write a pytest_configure or pytest_sessionstart hook in your conftest.py file:
# content of conftest.py

def pytest_configure(config):
    """
    Allows plugins and conftest files to perform initial configuration.
    This hook is called for every plugin and initial conftest
    file after command line options have been parsed.
    """

def pytest_sessionstart(session):
    """
    Called after the Session object has been created and
    before performing collection and entering the run test loop.
    """

def pytest_sessionfinish(session, exitstatus):
    """
    Called after whole test run finished, right before
    returning the exit status to the system.
    """

def pytest_unconfigure(config):
    """
    Called before test process is exited.
    """
pytest plugin
Create a pytest plugin with pytest_configure and pytest_unconfigure hooks.
Enable your plugin in conftest.py:
# content of conftest.py

pytest_plugins = [
    'plugins.example_plugin',
]

# content of plugins/example_plugin.py

def pytest_configure(config):
    pass

def pytest_unconfigure(config):
    pass

You might want to use a session-scoped "autouse" fixture:

# content of conftest.py or a tests file (e.g. in your tests or root directory)
import pytest

@pytest.fixture(scope="session", autouse=True)
def do_something(request):
    # prepare something ahead of all tests
    request.addfinalizer(finalizer_function)  # finalizer_function is your teardown callable

This will run ahead of all tests. The finalizer will be called after the last test has finished.
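Applied to the question, a hedged sketch of a session-scoped fixture that shares one Firefox instance across all test classes (assuming the selenium package; 'browser' is my own fixture name):

# content of conftest.py -- a sketch under the assumptions above
import pytest
from selenium import webdriver

@pytest.fixture(scope="session")
def browser():
    driver = webdriver.Firefox()  # opened once for the whole session
    yield driver                  # every test (class) reuses this instance
    driver.quit()                 # closed once, after the last test

Tests then simply take browser as an argument instead of creating their own driver.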

Starting from pytest 2.10 there is a cleaner way to tear down the fixture, as well as to define its scope. So you may use this syntax:

@pytest.fixture(scope="module", autouse=True)
def my_fixture():
    print('INITIALIZATION')
    yield  # everything before the yield is setup, everything after is teardown
    print('TEAR DOWN')
The autouse parameter. From the documentation:

Here is how autouse fixtures work in other scopes:
- autouse fixtures obey the scope= keyword-argument: if an autouse fixture has scope='session' it will only be run once, no matter where it is defined. scope='class' means it will be run once per class, etc.
- if an autouse fixture is defined in a test module, all its test functions automatically use it.
- if an autouse fixture is defined in a conftest.py file then all tests in all test modules below its directory will invoke the fixture.
...
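For instance, a minimal sketch of the scope='class' case (class and test names are my own):

import pytest

class TestGroup:
    @pytest.fixture(scope="class", autouse=True)
    def per_class_setup(self):
        print('runs once before the tests in this class')
        yield
        print('runs once after the tests in this class')

    def test_one(self):
        pass

    def test_two(self):
        pass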
The "request" parameter:
Note that the "request" parameter is not necessary for your purpose although you might want to use it for other purposes. From documentation:
"Fixture function can accept the request object to introspect the
“requesting” test function, class or module context.."
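A small sketch of that introspection, using attributes documented on the request object:

import pytest

@pytest.fixture(autouse=True)
def report_context(request):
    # request.module, request.cls and request.function describe the requester
    print('running in module:', request.module.__name__)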

Try to use pytest_sessionstart(session) in conftest.py
Example:
# project/tests/conftest.py

def pytest_sessionstart(session):
    print('BEFORE')

# project/tests/tests_example/test_sessionstart.py
import pytest

@pytest.fixture(scope='module', autouse=True)
def fixture():
    print('FIXTURE')

def test_sessionstart():
    print('TEST')
Log:
BEFORE
============================================================================ test session starts =============================================================================
platform darwin -- Python 3.7.0, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Library/Frameworks/Python.framework/Versions/3.7/bin/python3
cachedir: .pytest_cache
rootdir: /Users/user/Documents/test, inifile: pytest.ini
plugins: allure-pytest-2.8.12, env-0.6.2
collected 1 item
tests/6.1/test_sessionstart.py::test_sessionstart FIXTURE
TEST
PASSED

Related

How are pytest fixture scopes intended to work?

I want to use pytest fixtures to prepare an object I want to use across a set of tests.
I follow the documentation and create a fixture in something_fixture.py with its scope set to session, like this:

import pytest

@pytest.fixture(scope="session")
def something():
    return 'something'

Then in test_something.py I try to use the fixture like this:

def test_something(something):
    assert something == 'something'
Which does not work, but if I import the fixture like this:
from tests.something_fixture import something

def test_something(something):
    assert something == 'something'
the test passes...
Is this import necessary? Because to me this is not clear according to the documentation.
This session-scoped fixture should be defined in a conftest.py module, see conftest.py: sharing fixtures across multiple files in the docs.
The conftest.py file serves as a means of providing fixtures for an entire directory. Fixtures defined in a conftest.py can be used by any test in that package without needing to import them (pytest will automatically discover them).
By writing the fixture in something_fixture.py it was defined somewhere that went "unnoticed" because there was no reason for Python to import this module. The default test collection phase considers filenames matching these glob patterns:
- test_*.py
- *_test.py
Since it's a session-scoped fixture, define it instead in a conftest.py file, so it is discovered at collection time and available to all tests.
You can remove the import statement from tests.something_fixture import something. In fact the "tests" subdirectory generally doesn't need to be importable at all.
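A minimal sketch of the fix, moving the fixture into conftest.py (contents adapted from the question):

# conftest.py (next to the tests)
import pytest

@pytest.fixture(scope="session")
def something():
    return 'something'

# test_something.py -- no import needed; pytest injects the fixture by name
def test_something(something):
    assert something == 'something'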

Fixture order of execution in pytest

I'm new to using pytest. I'm a little confused about the appropriate use of fixtures and dependency injection for my project. My framework is the following:

conftest.py

import pytest

@pytest.fixture(scope="session")
def test_env(request):
    # some setup here for the entire test module test_suite.py
    ...

@pytest.fixture(scope="function")
def setup_test_suite(test_env):
    # some setup that needs to be done for each test function
    # (eventually parametrized for different input configurations?)
    ...

test_suite.py

def test_function(setup_test_suite):
    # some testing code here
    ...
What's the difference between putting the setup function in conftest.py or within the test suite itself?
I want the setup of the testing environment to be done once, for the entire session. If setup_test_suite is dependent on test_env, will test_env execute multiple times?
If I were to parameterize setup_test_suite, what would be the order of calling the fixtures?
The difference between a fixture in conftest.py and one in the local test module is visibility. A fixture in conftest.py is visible to all test modules at the same level and in modules below that level. If the fixture is set to autouse=True, it will be executed for all tests in these modules.
For more information, check What is the use of conftest.py files
If test_env is session-based, it will be executed only once in the session, even if it is referenced in multiple tests. If it yields an object, the same object will be passed to all tests referencing it.
For example:
import pytest

@pytest.fixture(scope="session")
def global_fixt():
    print('global fixture')
    yield 42

@pytest.fixture(scope="function")
def local_fixt(global_fixt):
    print('local fixture')
    yield 5

def test_1(global_fixt, local_fixt):
    print(global_fixt, local_fixt)

def test_2(global_fixt, local_fixt):
    print(global_fixt, local_fixt)
Gives the output for pytest -svv:
...
collected 2 items
test_global_local_fixtures.py::test_1 global fixture
local fixture
42 5
PASSED
test_global_local_fixtures.py::test_2 local fixture
42 5
PASSED
...
Fixture parameters are called in the order they are provided to the fixture. If you have, for example:

import pytest

@pytest.fixture(scope="session", params=[100, 1, 42])
def global_fixt(request):
    yield request.param

def test_1(global_fixt):
    assert True

def test_2(global_fixt):
    assert True
The tests will be executed in this order:
test_1 (100)
test_2 (100)
test_1 (1)
test_2 (1)
test_1 (42)
test_2 (42)

pytest: Reusable tests for different implementations of the same interface

Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.
test_foo.py:
from foo import Bar as Implementation
from pytest import mark

@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>

@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>

<more such tests>
Now imagine that, in the future, I expect different implementations of the same interface to be written. I would like those implementations to be able to reuse the tests that were written for the above test suite: the only things that need to change are
- the import of the Implementation
- <test data set 1>, <test data set 2> etc.
So I am looking for a way to write the above tests in a reusable way, that would allow authors of new implementations of the interface to be able to use the tests by injecting the implementation and the test data into them, without having to modify the file containing the original specification of the tests.
What would be a good, idiomatic way of doing this in pytest?

Here is a unittest version that (isn't pretty but) works.
define_tests.py:

# Single, reusable definition of tests for the interface. Authors of
# new implementations of the interface merely have to provide the test
# data, as class attributes of a class which inherits
# unittest.TestCase AND this class.
class TheTests():
    def test_foo(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_foo_data:
            self.assertEqual(self.Implementation(*args).foo(in_), out)

    def test_bar(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_bar_data:
            self.assertEqual(self.Implementation(*args).bar(in_), out)
v1.py:

# One implementation of the interface
class Implementation:
    def __init__(self, a, b):
        self.n = a + b

    def foo(self, n):
        return self.n + n

    def bar(self, n):
        return self.n - n
v1_test.py:

# Test for one implementation of the interface
from v1 import Implementation
from define_tests import TheTests
from unittest import TestCase

# Hook into the testing framework by inheriting unittest.TestCase, and reuse
# the tests which *each and every* implementation of the interface must
# pass, by inheritance from define_tests.TheTests
class FooTests(TestCase, TheTests):
    Implementation = Implementation
    test_foo_data = (((1, 2), 3, 6),
                     ((4, 5), 6, 15))
    test_bar_data = (((1, 2), 3, 0),
                     ((4, 5), 6, 3))
Anybody (even a client of the library) writing another implementation of this interface can
- reuse the set of tests defined in define_tests.py
- inject their own test data into the tests
- without modifying any of the original files
This is a great use case for parametrized test fixtures.
Your code could look something like this:

import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    return request.param

def test_one(Implementation):
    assert Implementation().frobnicate()
This would have test_one run twice: once where Implementation=Bar and once where Implementation=Baz.
Note that since Implementation is just a fixture, you can change its scope, or do more setup (maybe instantiate the class, maybe configure it somehow).
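For instance, a hedged sketch of the same fixture widened to session scope, instantiating each implementation once (assuming Bar and Baz take no constructor arguments; tests would then receive a ready-made instance rather than the class):

import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz], scope="session")
def Implementation(request):
    instance = request.param()  # instantiate once per implementation
    # ...configure the instance here if the interface requires it...
    return instance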
If used with the pytest.mark.parametrize decorator, pytest will generate all the permutations. For example, assuming the code above, and this code here:

@pytest.mark.parametrize('thing', [1, 2])
def test_two(Implementation, thing):
    assert Implementation(thing).foo == thing

test_two will run four times, with the following configurations:
- Implementation=Bar, thing=1
- Implementation=Bar, thing=2
- Implementation=Baz, thing=1
- Implementation=Baz, thing=2
You can't do it without class inheritance, but you don't have to use unittest.TestCase. To make it more pytest-like, you can use fixtures.
That allows you, for example, to parametrize the fixture or to use other fixtures.
Here is a simple example:

import pytest

class SomeTest:
    @pytest.fixture
    def implementation(self):
        return "A"

    def test_a(self, implementation):
        assert "A" == implementation

class OtherTest(SomeTest):
    @pytest.fixture(params=["B", "C"])
    def implementation(self, request):
        return request.param

    def test_a(self, implementation):
        """ the "implementation" fixture is not accessible outside of the class """
        assert "A" == implementation
and the inherited test fails in OtherTest, once per parameter:

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'B'
E         - A
E         + B

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'C'
E         - A
E         + C

while a module-level test outside the classes cannot see the fixture at all:

def test_a(implementation):
E       fixture 'implementation' not found
Don't forget you have to define python_classes = *Test in pytest.ini, otherwise classes whose names don't start with Test are not collected.
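For reference, a minimal sketch of that configuration file:

# pytest.ini
[pytest]
python_classes = *Test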
I did something similar to what @Daniel Barto was saying, adding additional fixtures.
Let's say you have 1 interface and 2 implementations:

class Imp1(InterfaceA):
    pass  # Some implementation.

class Imp2(InterfaceA):
    pass  # Some implementation.
You can indeed encapsulate testing in subclasses:

import pytest

@pytest.fixture
def imp_1():
    yield Imp1()

@pytest.fixture
def imp_2():
    yield Imp2()

class InterfaceToBeTested:
    @pytest.fixture
    def imp(self):
        pass

    def test_x(self, imp):
        assert imp.test_x()

    def test_y(self, imp):
        assert imp.test_y()

class TestImp1(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_1):
        yield imp_1

    def test_1(self, imp):
        assert imp.test_1()

class TestImp2(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_2):
        yield imp_2
Note how, by adding an additional derived class and overriding the fixture that returns the implementation, you can run all tests on it; if there are implementation-specific tests, they can be written there as well.
Conditional Plugin Based Solution
There is in fact a technique that leans on the pytest_plugins list where you can condition its value on something that transcends pytest, namely environment variables and command line arguments. Consider the following:
import os

if os.environ["pytest_env"] == "env_a":
    pytest_plugins = [
        "projX.plugins.env_a",
    ]
elif os.environ["pytest_env"] == "env_b":
    pytest_plugins = [
        "projX.plugins.env_b",
    ]
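You would then run the suite once per implementation, selecting the plugin through the environment variable, e.g. in a POSIX shell (variable name from the snippet above):

pytest_env=env_a pytest
pytest_env=env_b pytest

Each invocation loads a different plugin, so covering both fixture variations takes two pytest runs.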
I authored a GitHub repository to share some pytest experiments demonstrating the above techniques with commentary along the way and test run results. The relevant section to this particular question is the conditional_plugins experiment.
https://github.com/jxramos/pytest_behavior
This would position you to use the same test module with two different implementations of an identically named fixture. However you'd need to invoke the test once per each implementation with the selection mechanism singling out the fixture implementation of interest. Therefore you'd need two pytest sessions to accomplish testing the two fixture variations.
In order to reuse the tests you have in place, you'd need to establish a root directory higher than the project you're trying to reuse and define a conftest.py file there that does the plugin selection. That still may not be enough, because of the overriding behavior of the test module and any intermediate conftest files, if you leave the directory structure as is. But if you're free to reshuffle files and leave them unchanged, you just need to get the existing conftest file out of the line of the path from the test module to the root directory and rename it so it can be detected as a plugin instead.
Configuration / Command line Selection of Plugins
Pytest actually has a -p command line option where you can list multiple plugins back to back to specify the plugin files. You can learn more of that control by looking in the ini_plugin_selection experiment in the pytest_behavior repo.
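For example, to select one of the plugin modules from the snippet above directly on the command line (the -p option accepts a dotted module path):

pytest -p projX.plugins.env_a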
Parametrization over Fixture Values
As of this writing this is a work in progress for core pytest functionality, but there is a third-party plugin, pytest-cases, which supports the notion of a fixture itself being used as a parameter to a test case. With that capability you can parametrize over multiple fixtures for the same test case, where each fixture is backed by each API implementation. This sounds like the ideal solution to your use case; however, you would still need to decorate the existing test module with new source to permit this parametrization over fixtures, which may not be permissible for you.
Take a look at this rich discussion in an open pytest issue #349 Using fixtures in pytest.mark.parametrize, specifically this comment. He links to a concrete example he wrote up that demonstrates the new fixture parametrization syntax.
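A hedged sketch of that style, assuming pytest-cases' parametrize/fixture_ref API (ImpA and ImpB are hypothetical implementations; check the plugin's documentation for the exact import names):

import pytest
from pytest_cases import parametrize, fixture_ref

@pytest.fixture
def imp_a():
    return ImpA()  # hypothetical implementation

@pytest.fixture
def imp_b():
    return ImpB()  # hypothetical implementation

@parametrize("imp", [fixture_ref(imp_a), fixture_ref(imp_b)])
def test_interface(imp):
    assert imp.frobnicate()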
Commentary
I get the sense that the test fixture hierarchy one can build above a test module, all the way up to the execution's root directory, is oriented more towards fixture reuse than test module reuse. If you think about it, you can write several fixtures way up in a common subfolder where a bunch of test modules branch out, potentially landing deep down in a number of child subdirectories. Each of those test modules would have access to fixtures defined in that parent conftest.py, but without doing extra work they only get one definition per fixture across all those intermediate conftest.py files, even if the same name is reused across that hierarchy.
The fixture is chosen closest to the test module through the pytest fixture overriding mechanism, but the resolving stops at the test module and does not go past it to any folders beneath the test module where variation might be found. Essentially there's only one path from the test module to the root dir, which limits the fixture definitions to one. This gives us a one-fixture-to-many-test-modules relationship.
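To illustrate the overriding mechanism described above, a small sketch (paths and names are my own):

# conftest.py at the root
import pytest

@pytest.fixture
def api():
    return "default implementation"

# subdir/conftest.py -- overrides the root definition for tests below it
import pytest

@pytest.fixture
def api():
    return "specialized implementation"

# subdir/test_api.py -- receives the closest definition
def test_api(api):
    assert api == "specialized implementation"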

Can pytest fixtures be combined?

Can one fixture build on another in pytest? I have a very simple fixture called "cleaner" defined as:

import pytest
from mypackage import db

@pytest.fixture()
def cleaner(request):
    def finalizer():
        db.clear()
    request.addfinalizer(finalizer)

then in my setup.cfg I have:

[pytest]
norecursedirs = .git venv
usefixtures = cleaner
This results in the database being truncated after each test, which is great. But now I want other fixtures I make to also call the finalizer from cleaner. Is there a way to define another fixture that somehow expands on or calls cleaner?
You have to declare that your other fixture depends on cleaner explicitly:

import pytest

@pytest.fixture
def cleaner(request):
    def finalizer():
        print('\n"cleaner" finalized')
    print('\n"cleaner" fixture')
    request.addfinalizer(finalizer)

@pytest.fixture
def other(cleaner):
    print('\n"other" fixture')

def test_foo(other):
    pass
Running this with py.test -s -v produces:
test_foo.py::test_foo
"cleaner" fixture
"other" fixture
PASSED
"cleaner" finalized

Why can't unittest.TestCases see my py.test fixtures?

I'm trying to use py.test's fixtures with my unit tests, in conjunction with unittest. I've put several fixtures in a conftest.py file at the top level of the project (as described here), decorated them with @pytest.fixture, and put their names as arguments to the test functions that require them.
The fixtures are registering correctly, as shown by py.test --fixtures test_stuff.py, but when I run py.test, I get NameError: global name 'my_fixture' is not defined. This appears to only occur when I use subclasses of unittest.TestCase, yet the py.test docs seem to say that it plays well with unittest.
Why can't the tests see the fixtures when I use unittest.TestCase?
Doesn't work:
conftest.py
import pytest

@pytest.fixture
def my_fixture():
    return 'This is some fixture data'

test_stuff.py

import unittest
import pytest

class TestWithFixtures(unittest.TestCase):
    def test_with_a_fixture(self, my_fixture):
        print(my_fixture)
Works:
conftest.py
import pytest

@pytest.fixture()
def my_fixture():
    return 'This is some fixture data'

test_stuff.py

import pytest

class TestWithFixtures:
    def test_with_a_fixture(self, my_fixture):
        print(my_fixture)
I'm asking this question more out of curiosity; for now I'm just ditching unittest altogether.
While pytest supports receiving fixtures via test function arguments
for non-unittest test methods, unittest.TestCase methods cannot
directly receive fixture function arguments as implementing that is
likely to inflict on the ability to run general unittest.TestCase test
suites.
From the note section at the bottom of:
https://pytest.org/en/latest/unittest.html
It's possible to use fixtures with unittest.TestCase subclasses; see that page for more information.
You can use pytest fixtures in a unittest.TestCase by means of pytest's autouse option. However, if you give the method that receives the fixture a test_ prefix, the following error will appear:
Fixtures are not meant to be called directly, ...
### conftest.py
import pytest

@pytest.fixture
def my_fixture():
    return 'This is some fixture data'
One solution is to use a prepare_fixture method to set fixtures as an attribute of the TestWithFixtures class, so that fixtures are available to all unit test methods.
### test_stuff.py
import unittest
import pytest

class TestWithFixtures(unittest.TestCase):
    @pytest.fixture(autouse=True)
    def prepare_fixture(self, my_fixture):
        self.myfixture = my_fixture

    def test_with_a_fixture(self):
        print(self.myfixture)
Alternatively:
- Define the fixture value as an accessible attribute (like input in the following example). To set it, use request.cls.VARIABLE_NAME_YOU_DEFINE = RETURN_VALUE.
- Use @pytest.mark.usefixtures("YOUR_FIXTURE") to apply a fixture defined outside of the unittest class; inside the class, access the value via self.VARIABLE_NAME_YOU_DEFINE.
e.g.
import unittest
import pytest

@pytest.fixture(scope="class")
def test_input(request):
    request.cls.input = {"key": "value"}

@pytest.mark.usefixtures("test_input")
class MyTestCase(unittest.TestCase):
    def test_something(self):
        self.assertEqual(self.input["key"], "value")
