py.test: Temporary folder for the session scope - python

The tmpdir fixture in py.test uses the function scope and thus isn't available in a fixture with a broader scope such as session. However, this would be useful for some cases such as setting up a temporary PostgreSQL server (which of course shouldn't be recreated for each test).
Is there any clean way to get a temporary folder for a broader scope that does not involve writing my own fixture and accessing internal APIs of py.test?

Since pytest 2.8, the session-scoped tmpdir_factory fixture is available. See the example below from the documentation.
# contents of conftest.py
import pytest


@pytest.fixture(scope='session')
def image_file(tmpdir_factory):
    img = compute_expensive_image()
    fn = tmpdir_factory.mktemp('data').join('img.png')
    img.save(str(fn))
    return fn

# contents of test_image.py
def test_histogram(image_file):
    img = load_image(image_file)
    # compute and test histogram

Unfortunately there is currently (2014) no nice way of doing this. In the future py.test may introduce a new "any" scope or something similar for this, but that's the future.
Right now you have to do this manually. However, as you note, you then lose quite a few nice features: the symlink in /tmp to the last test run, automatic cleanup after a few test runs, sensibly named directories, etc. If the directory is not too expensive to create, I usually combine a session-scoped and a function-scoped fixture in the following way:
import tempfile

import py
import pytest


@pytest.fixture(scope='session')
def session_dir(request):
    temp_dir = py.path.local(tempfile.mkdtemp())
    request.addfinalizer(lambda: temp_dir.remove(rec=1))
    # Any extra setup here
    return temp_dir


@pytest.fixture
def temp_dir(session_dir, tmpdir):
    session_dir.copy(tmpdir)
    return tmpdir
This creates a temporary directory which gets cleaned up after the test run; each test which actually needs it (by requesting temp_dir) gets its own copy, which follows the usual tmpdir semantics.
If tests actually need to share state via this directory, then the finalizer of temp_dir would have to copy things back into session_dir. This is, however, not a very good idea, since it makes the tests reliant on execution order and would also cause problems when using pytest-xdist.

I add a finalizer when I want to delete all temporary folders created in the session.
import os
import shutil

import pytest

_tmp_factory = None


@pytest.fixture(scope="session")
def tmp_factory(request, tmpdir_factory):
    global _tmp_factory
    if _tmp_factory is None:
        _tmp_factory = tmpdir_factory
        request.addfinalizer(cleanup)
    return _tmp_factory


def cleanup():
    root = _tmp_factory.getbasetemp().strpath
    print("Cleaning all temporary folders from %s" % root)
    shutil.rmtree(root)


def test_deleting_temp(tmp_factory):
    root_a = tmp_factory.mktemp('A')
    root_a.join('foo.txt').write('hello world A')
    root_b = tmp_factory.mktemp('B')
    root_b.join('bar.txt').write('hello world B')
    for root, _, files in os.walk(tmp_factory.getbasetemp().strpath):
        for name in files:
            print(os.path.join(root, name))
The output should look like this:
/tmp/pytest-of-agp/pytest-0/.lock
/tmp/pytest-of-agp/pytest-0/A0/foo.txt
/tmp/pytest-of-agp/pytest-0/B0/bar.txt
Cleaning all temporary folders from /tmp/pytest-of-agp/pytest-0

Here's another approach. It looks like pytest doesn't remove temporary directories after test runs. The following is a regular function-scoped fixture.
# conftest.py
import pytest

TMPDIRS = list()


@pytest.fixture
def tmpdir_session(tmpdir):
    """A tmpdir fixture for the session scope. Persists throughout the session."""
    if not TMPDIRS:
        TMPDIRS.append(tmpdir)
    return TMPDIRS[0]
And to have persistent temporary directories across modules instead of the whole pytest session:
# conftest.py
import pytest

TMPDIRS = dict()


@pytest.fixture
def tmpdir_module(request, tmpdir):
    """A tmpdir fixture for the module scope. Persists throughout the module."""
    return TMPDIRS.setdefault(request.module.__name__, tmpdir)
Edit:
Here's another solution that doesn't involve global variables. pytest 2.8 introduced the tmpdir_factory fixture that we can use:
import pytest


@pytest.fixture(scope='module')
def tmpdir_module(request, tmpdir_factory):
    """A tmpdir fixture for the module scope. Persists throughout the module."""
    return tmpdir_factory.mktemp(request.module.__name__)


@pytest.fixture(scope='session')
def tmpdir_session(request, tmpdir_factory):
    """A tmpdir fixture for the session scope. Persists throughout the pytest session."""
    return tmpdir_factory.mktemp(request.session.name)

How are pytest fixture scopes intended to work?

I want to use pytest fixtures to prepare an object I want to use across a set of tests.
I follow the documentation and create a fixture in something_fixture.py with its scope set to session like this:
import pytest


@pytest.fixture(scope="session")
def something():
    return 'something'
Then in test_something.py I try to use the fixture like this:
def test_something(something):
    assert something == 'something'
Which does not work, but if I import the fixture like this:
from tests.something_fixture import something

def test_something(something):
    assert something == 'something'
the test passes...
Is this import necessary? It isn't clear to me from the documentation.
This session-scoped fixture should be defined in a conftest.py module, see conftest.py: sharing fixtures across multiple files in the docs.
The conftest.py file serves as a means of providing fixtures for an entire directory. Fixtures defined in a conftest.py can be used by any test in that package without needing to import them (pytest will automatically discover them).
By writing the fixture in something_fixture.py it was defined somewhere that went "unnoticed" because there was no reason for Python to import this module. The default test collection phase considers filenames matching these glob patterns:
- test_*.py
- *_test.py
Since it's a session-scoped fixture, define it instead in a conftest.py file, so that pytest discovers it automatically and it is available to all tests.
You can remove the import statement from tests.something_fixture import something. In fact the "tests" subdirectory generally doesn't need to be importable at all.
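For example, a minimal layout following the advice above (the file names are the ones from the question) would be:
# tests/conftest.py
import pytest


@pytest.fixture(scope="session")
def something():
    return 'something'

# tests/test_something.py -- no import needed, pytest discovers the fixture
def test_something(something):
    assert something == 'something'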

pytest: Reusable tests for different implementations of the same interface

Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.
test_foo.py:
from foo import Bar as Implementation
from pytest import mark

@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>

@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>

<more such tests>
Now imagine that, in the future, I expect different implementations of the same interface to be written. I would like those implementations to be able to reuse the tests written for the above test suite: the only things that need to change are
- the import of the Implementation
- <test data set 1>, <test data set 2>, etc.
So I am looking for a way to write the above tests reusably, allowing authors of new implementations of the interface to use them by injecting the implementation and the test data, without having to modify the file containing the original specification of the tests.
What would be a good, idiomatic way of doing this in pytest?
====================================================================
====================================================================
Here is a unittest version that (isn't pretty but) works.
define_tests.py:
# Single, reusable definition of tests for the interface. Authors of
# new implementations of the interface merely have to provide the test
# data, as class attributes of a class which inherits
# unittest.TestCase AND this class.
class TheTests():
    def test_foo(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_foo_data:
            self.assertEqual(self.Implementation(*args).foo(in_), out)

    def test_bar(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_bar_data:
            self.assertEqual(self.Implementation(*args).bar(in_), out)
v1.py:
# One implementation of the interface
class Implementation:
    def __init__(self, a, b):
        self.n = a + b

    def foo(self, n):
        return self.n + n

    def bar(self, n):
        return self.n - n
v1_test.py:
# Test for one implementation of the interface
from v1 import Implementation
from define_tests import TheTests
from unittest import TestCase

# Hook into testing framework by inheriting unittest.TestCase and reuse
# the tests which *each and every* implementation of the interface must
# pass, by inheritance from define_tests.TheTests
class FooTests(TestCase, TheTests):
    Implementation = Implementation
    test_foo_data = (((1, 2), 3, 6),
                     ((4, 5), 6, 15))
    test_bar_data = (((1, 2), 3, 0),
                     ((4, 5), 6, 3))
Anybody (even a client of the library) writing another implementation of this interface can:
- reuse the set of tests defined in define_tests.py
- inject their own test data into the tests
- do so without modifying any of the original files
This is a great use case for parametrized test fixtures.
Your code could look something like this:
import pytest
from foo import Bar, Baz


@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    return request.param


def test_one(Implementation):
    assert Implementation().frobnicate()
This would have test_one run twice: once where Implementation=Bar and once where Implementation=Baz.
Note that since Implementation is just a fixture, you can change its scope, or do more setup (maybe instantiate the class, maybe configure it somehow).
If used with the pytest.mark.parametrize decorator, pytest will generate all the permutations. For example, assuming the code above, and this code here:
@pytest.mark.parametrize('thing', [1, 2])
def test_two(Implementation, thing):
    assert Implementation(thing).foo == thing
test_two will run four times, with the following configurations:
Implementation=Bar, thing=1
Implementation=Bar, thing=2
Implementation=Baz, thing=1
Implementation=Baz, thing=2
You can't do it without class inheritance, but you don't have to use unittest.TestCase. To make it more pytest-like, you can use fixtures.
That allows you, for example, to parametrize fixtures or to use other fixtures.
Here is a simple example:
import pytest


class SomeTest:
    @pytest.fixture
    def implementation(self):
        return "A"

    def test_a(self, implementation):
        assert "A" == implementation


class OtherTest(SomeTest):
    @pytest.fixture(params=["B", "C"])
    def implementation(self, request):
        return request.param


def test_a(implementation):
    """ the "implementation" fixture is not accessible out of class """
    assert "A" == implementation
The test inherited by OtherTest fails for the overridden fixture values, and the module-level test_a cannot see the class-level fixture at all:
    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'B'
E         - A
E         + B

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'C'
E         - A
E         + C

    def test_a(implementation):
        fixture 'implementation' not found
Don't forget that you have to define python_classes = *Test in pytest.ini.
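For reference, a minimal pytest.ini with that option might look like this (the *Test pattern matches the class names used above; adjust it to your own naming convention):
# pytest.ini
[pytest]
python_classes = *Test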
I did something similar to what @Daniel Barto was saying, adding additional fixtures.
Let's say you have 1 interface and 2 implementations:
class Imp1(InterfaceA):
    pass  # Some implementation.


class Imp2(InterfaceA):
    pass  # Some implementation.
You can indeed encapsulate testing in subclasses:
import pytest


@pytest.fixture
def imp_1():
    yield Imp1()


@pytest.fixture
def imp_2():
    yield Imp2()


class InterfaceToBeTested:
    @pytest.fixture
    def imp(self):
        pass

    def test_x(self, imp):
        assert imp.test_x()

    def test_y(self, imp):
        assert imp.test_y()


class TestImp1(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_1):
        yield imp_1

    def test_1(self, imp):
        assert imp.test_1()


class TestImp2(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_2):
        yield imp_2
Note: notice how, by adding an additional derived class and overriding the fixture that returns the implementation, you can run all the tests on it; any implementation-specific tests can be written there as well.
Conditional Plugin Based Solution
There is in fact a technique that leans on the pytest_plugins list where you can condition its value on something that transcends pytest, namely environment variables and command line arguments. Consider the following:
if os.environ["pytest_env"] == "env_a":
pytest_plugins = [
"projX.plugins.env_a",
]
elif os.environ["pytest_env"] == "env_b":
pytest_plugins = [
"projX.plugins.env_b",
]
I authored a GitHub repository to share some pytest experiments demonstrating the above techniques with commentary along the way and test run results. The relevant section to this particular question is the conditional_plugins experiment.
https://github.com/jxramos/pytest_behavior
This would position you to use the same test module with two different implementations of an identically named fixture. However you'd need to invoke the test once per each implementation with the selection mechanism singling out the fixture implementation of interest. Therefore you'd need two pytest sessions to accomplish testing the two fixture variations.
In order to reuse the tests you have in place, you'd need to establish a root directory higher than the project you're trying to reuse and define a conftest.py file there that does the plugin selection. That still may not be enough, because of the overriding behavior of the test module and any intermediate conftest files, if you leave the directory structure as is. But if you're free to reshuffle files while leaving their contents unchanged, you just need to get the existing conftest file out of the path from the test module to the root directory and rename it so it can be picked up as a plugin instead.
Configuration / Command line Selection of Plugins
Pytest actually has a -p command line option where you can list multiple plugins back to back to specify the plugin files. You can learn more of that control by looking in the ini_plugin_selection experiment in the pytest_behavior repo.
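For instance, selecting one plugin per session from the command line might look like this (the module paths are just the hypothetical ones from the snippet above):
pytest -p projX.plugins.env_a tests/
pytest -p projX.plugins.env_b tests/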
Parametrization over Fixture Values
As of this writing, this is a work in progress for core pytest, but there is a third-party plugin, pytest-cases, which supports using a fixture itself as a parameter to a test case. With that capability you can parametrize a single test over multiple fixtures, where each fixture is backed by one API implementation. This sounds like the ideal solution for your use case; however, you would still need to decorate the existing test module with new source to permit this parametrization over fixtures, which may not be permissible for you.
Take a look at the rich discussion in the open pytest issue #349, Using fixtures in pytest.mark.parametrize, specifically this comment, which links to a concrete example demonstrating the fixture parametrization syntax.
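As a rough sketch of that approach (illustrative only: it assumes pytest-cases is installed and that its fixture_ref/parametrize helpers work as documented; Bar and Baz are the implementations from the earlier answer):
import pytest
from pytest_cases import fixture_ref, parametrize

from foo import Bar, Baz


@pytest.fixture
def imp_bar():
    return Bar()


@pytest.fixture
def imp_baz():
    return Baz()


# Parametrize the test over the fixtures themselves; each fixture wraps
# one implementation of the interface.
@parametrize("implementation", [fixture_ref(imp_bar), fixture_ref(imp_baz)])
def test_frobnicate(implementation):
    assert implementation.frobnicate()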
Commentary
I get the sense that the test fixture hierarchy one can build above a test module all the way up to the execution's root directory is something more oriented towards fixture reuse but not so much test module reuse. If you think about it you can write several fixtures way up in a common subfolder where a bunch of test modules branch out potentially landing deep down in a number of child subdirectories. Each of those test modules would have access to fixtures defined in that parent conftest.py, but without doing extra work they only get one definition per fixture across all those intermediate conftest.py files even if the same name is reused across that hierarchy. The fixture is chosen closest to the test module through the pytest fixture overriding mechanism, but the resolving stops at the test module and does not go past it to any folders beneath the test module where variation might be found. Essentially there's only one path from the test module to the root dir which limits the fixture definitions to one. This gives us a one fixture to many test modules relationship.

How to run a method before all tests in all classes?

I'm writing selenium tests, with a set of classes, each class containing several tests. Each class currently opens and then closes Firefox, which has two consequences:
- super slow: opening Firefox takes longer than running the tests in a class...
- crashes, because after Firefox has been closed, trying to reopen it really quickly from Selenium results in an 'Error 54'
I could solve the error 54, probably, by adding a sleep, but it would still be super slow.
So, what I'd like to do is reuse the same Firefox instances across all test classes. Which means I need to run a method before all test classes, and another method after all test classes. So, 'setup_class' and 'teardown_class' are not sufficient.
Using a session fixture as suggested by hpk42 is a great solution for many cases, but the fixture will only run after all tests have been collected.
Here are two more solutions:
conftest hooks
Write a pytest_configure or pytest_sessionstart hook in your conftest.py file:
# content of conftest.py
def pytest_configure(config):
    """
    Allows plugins and conftest files to perform initial configuration.
    This hook is called for every plugin and initial conftest
    file after command line options have been parsed.
    """


def pytest_sessionstart(session):
    """
    Called after the Session object has been created and
    before performing collection and entering the run test loop.
    """


def pytest_sessionfinish(session, exitstatus):
    """
    Called after whole test run finished, right before
    returning the exit status to the system.
    """


def pytest_unconfigure(config):
    """
    Called before test process is exited.
    """
pytest plugin
Create a pytest plugin with pytest_configure and pytest_unconfigure hooks.
Enable your plugin in conftest.py:
# content of conftest.py
pytest_plugins = [
    'plugins.example_plugin',
]

# content of plugins/example_plugin.py
def pytest_configure(config):
    pass


def pytest_unconfigure(config):
    pass
You might want to use a session-scoped "autouse" fixture:
# content of conftest.py or a tests file (e.g. in your tests or root directory)
import pytest


@pytest.fixture(scope="session", autouse=True)
def do_something(request):
    # prepare something ahead of all tests
    request.addfinalizer(finalizer_function)  # finalizer_function: your teardown callable
This will run ahead of all tests. The finalizer will be called after the last test finished.
Starting from version 2.10 there is a cleaner way to tear down the fixture, as well as to define its scope. So you may use this syntax:
@pytest.fixture(scope="module", autouse=True)
def my_fixture():
    print('INITIALIZATION')
    yield  # everything after the yield runs as teardown
    print('TEAR DOWN')
The autouse parameter:
From documentation:
Here is how autouse fixtures work in other scopes:
autouse fixtures obey the scope= keyword-argument: if an autouse fixture has scope='session' it will only be run once, no matter where
it is defined. scope='class' means it will be run once per class, etc.
if an autouse fixture is defined in a test module, all its test functions automatically use it.
if an autouse fixture is defined in a conftest.py file then all tests in all test modules below its directory will invoke the fixture.
...
The "request" parameter:
Note that the "request" parameter is not necessary for your purpose although you might want to use it for other purposes. From documentation:
"Fixture function can accept the request object to introspect the
“requesting” test function, class or module context.."
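For illustration, a small sketch of such introspection (the fixture name and printed fields are just examples):
import pytest


@pytest.fixture(scope="session", autouse=True)
def report_context(request):
    # The request object exposes the requesting context, e.g. the
    # session node and the rootdir of the test run.
    print("running session:", request.session.name)
    print("rootdir:", request.config.rootdir)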
Try using pytest_sessionstart(session) in conftest.py.
Example:
# project/tests/conftest.py
def pytest_sessionstart(session):
    print('BEFORE')

# project/tests/tests_example/test_sessionstart.py
import pytest


@pytest.fixture(scope='module', autouse=True)
def fixture():
    print('FIXTURE')


def test_sessonstart():
    print('TEST')
Log:
BEFORE
============================================================================ test session starts =============================================================================
platform darwin -- Python 3.7.0, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Library/Frameworks/Python.framework/Versions/3.7/bin/python3
cachedir: .pytest_cache
rootdir: /Users/user/Documents/test, inifile: pytest.ini
plugins: allure-pytest-2.8.12, env-0.6.2
collected 1 item
tests/6.1/test_sessionstart.py::test_sessonstart FIXTURE
TEST
PASSED

Nose test single setup function called once

How do I create a single setup function for all my nose test cases that is only called once during initialization? I have a global configuration that only needs to be set once, and I feel that adding the following to each module (or even calling a setup function for each module) is a bit superfluous:
def setUp(self):
    Configuration.configure('some configuration settings')
I figured it out! Nose provides package-level setup and teardown as documented here. All I have to do is define the setup method in the package's __init__.py file.
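A minimal sketch of that package-level setup (assuming Configuration is importable as in the question; the import path below is hypothetical, and nose also accepts the plain names setup/teardown here):
# content of tests/__init__.py
from myproject.config import Configuration  # hypothetical import path


def setup_package():
    # Runs once, before any test in the package.
    Configuration.configure('some configuration settings')


def teardown_package():
    # Runs once, after all tests in the package.
    pass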
Here, you can see an example of how to use the setup function. To make things simple:
lines = []


def setup():
    global lines
    lines.append('test')  # here, we can trigger a build
                          # and read in a file, for example


def test_this():
    assert lines[0] == 'test'

pytest fixture with scope session running for every test

Correct me if I'm wrong, but if a fixture is defined with scope="session", shouldn't it be run only once per the whole pytest run?
For example:
import pytest


@pytest.fixture
def foo(scope="session"):
    print('foooooo')


def test_foo(foo):
    assert False


def test_bar(foo):
    assert False
I have some tests that rely on data retrieved from some APIs, and instead of querying the API in each test, I'd rather have a fixture that gets all the data at once, with each test then using whatever data it needs. However, I was noticing that for every test, a request was made to the API.
That's because you're declaring the fixture wrong: scope should go into the pytest.fixture decorator parameters:
@pytest.fixture(scope="session")
def foo():
    print('foooooo')
In your code, the scope is left at its default value, function; that's why the fixture is run for each test.
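For the API use case described in the question, a hedged sketch would be (fetch_all_data_from_api is a hypothetical helper standing in for your real API calls):
import pytest


@pytest.fixture(scope="session")
def api_data():
    # Query the API once per session; every test reuses the cached result.
    return fetch_all_data_from_api()  # hypothetical helper


def test_users(api_data):
    assert "users" in api_data


def test_orders(api_data):
    assert "orders" in api_data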
