I'm trying to write unit tests using pytest for a configuration system that should look in a couple of different places for config files. I can use pyfakefs via the fs plugin to create a fixture that provides a set of files, including a config file in one location, but I'd like to have unit testing that both locations are checked and that the correct one is preferred.
My initial thought was that I could create the first fixture and then add a file:
@pytest.fixture()
def fake_files(fs):
    fs.create_file('/path/to/datafile', contents="Foo")
    yield fs

@pytest.fixture()
def user_config(fake_files):
    fake_files.create_file('/path/to/user/config', contents="Bar")
    yield fake_files
The idea behind this is that any test using fake_files would not find the config, but that using user_config would. However, unit tests using that user_config fixture do not find the file. Is this possible?
In reality the first fixture sets up quite a lot more, so maintaining the two setups as completely separate fixtures would duplicate code, and I'm unclear whether the underlying fs object can even be used in parallel.
I want to run tests in pytest according to the execution time of the last run. Is there a plugin to do so? Or how would you write the plugin?
I guess I need to cache the execution times of the last run, but I'm not sure on how to do that.
Also, I'd like to run using the --ff option, giving priority to tests that failed first, and then ordering by execution time.
Thanks in advance!
I believe there is no such plugin but, as you said, you can write your own.
Since a pytest plugin is just a set of hooks and fixtures, you may be able to get by with a few hooks in your conftest.py file.
First of all, you need to learn how to write hooks (see the pytest documentation on writing hooks).
Then, using the hook reference, find the hooks you need.
I believe the algorithm will be something like this:
1. Run the tests.
2. After test execution, collect the test results into a file (you will need each test's full name, duration, and status), OR reuse files generated by a pytest reporting plugin.
3. Run the tests again.
4. During the pytest initialization stage, register a new command line option, -ff.
5. During the pytest collection stage, read the saved file and the -ff parameter value, and change the test execution order according to the desired rules.
To implement this thing some of these hooks may be useful:
pytest_addoption - Register argparse-style options and ini-style config values; called once at the beginning of a test run. Use this to implement step 4.
pytest_collection_modifyitems - Called after collection has been performed; may filter or re-order the items in place. Use this to implement step 5; the test order must be changed in the items list.
pytest_report_teststatus - Use this to implement step 2; the report object contains duration, outcome (a str), passed (a bool), and nodeid (which is essentially the test's full name).
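Putting those hooks together, a minimal conftest.py sketch might look like the following. This is an assumption-laden illustration, not the plugin itself: the option name --slow-last, the cache key cache/test_durations, and the use of pytest's built-in config.cache (instead of a hand-rolled results file) are all choices made here for brevity.

```python
# conftest.py (sketch): reorder tests so the fastest from the last run go first.

def pytest_addoption(parser):
    # Step 4: register a command line option (name assumed for this sketch).
    parser.addoption(
        "--slow-last",
        action="store_true",
        default=False,
        help="order tests by last recorded duration (fastest first)",
    )


def pytest_collection_modifyitems(config, items):
    # Step 5: re-order the collected items in place.
    if not config.getoption("--slow-last"):
        return
    durations = config.cache.get("cache/test_durations", {})
    # Tests with no recorded duration sort first (duration 0).
    items.sort(key=lambda item: durations.get(item.nodeid, 0))


def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # Step 2: persist each test's duration for the next run.
    durations = {}
    for reports in terminalreporter.stats.values():
        for report in reports:
            if getattr(report, "when", None) == "call":
                durations[report.nodeid] = report.duration
    config.cache.set("cache/test_durations", durations)
```

Combining this with pytest's built-in --ff (failed-first) is then just a matter of passing both options, since --ff reorders independently of this hook.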
I wrote my own plugin: https://github.com/david26694/pytest-slow-last
You can install it via:
pip install pytest-slow-last
and run
pytest --slow-last
The project has many modules, and functional test cases have been written for almost every API (GET, POST, and PUT requests). To test an individual file we use the syntax pytest tests/file_name.py,
but I want to test a specific method in that file. Is there any way to do that?
Duplicate of Is there a way to specify which pytest tests to run from a file?
In a few words, you can use the -k option of pytest to specify the name of the test you would like to run.
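For example (the file and test names here are hypothetical), you can use either a -k expression or the test's node id:

```shell
# Run only the tests whose names match the -k expression:
pytest tests/file_name.py -k "test_get_user"

# Or address one test directly by its node id:
pytest tests/file_name.py::test_get_user
```

The node id form is exact, while -k matches substrings, so -k can select several tests at once.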
I've got a test suite that is working well to test code across two separate databases (SQLite and Postgres). I want to extend this to run the exact same test suite across upgraded databases (to test that database schema upgrades are working as expected).
The upgrades to run are determined outside of pytest, from a shell script, based on information from Git, which determines what schema versions there are, compares against available upgrade scripts, and then should invoke pytest. I'd like to use something like:
pytest --dbupgrade=v1 --dbupgrade=v2 tests/test-upgrades.py
I have the following in conftest.py:
def pytest_addoption(parser):
    parser.addoption(
        "--dbupgrade",
        action="append",
        default=[],
        help="list of base schema versions to upgrade"
    )
And I've been using parametrized fixtures for the other tests. I already have all the test cases written and working, and I would like to avoid rewriting them to be parametrized themselves, as I've seen suggested in solutions using pytest_generate_tests. So where I could easily hardcode:
@pytest.fixture(params=['v1', 'v2'])
def myfixture(request):
    ...
I would like to do:
@pytest.fixture(params=pytest.config.option.get('dbupgrade'))
def myfixture(request):
    ...
However, the results from pytest_addoption are only available via the pytestconfig fixture or the config attribute attached to various objects, and I can't find a way to get at them in the declaration of the fixture, although I believe they are available by that point.
Update (workaround)
I don't love it, but I'm pulling the necessary information from environment variables and that's working fine. Something like:
import os

# for this case I prefer this to fail noisily if it fails
schema_versions = os.environ['SCHEMA_VERSIONS'].split(',')
...

@pytest.fixture(params=schema_versions)
def myfixture(request):
    ...
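For reference, the pytest_generate_tests route the question hoped to avoid is fairly short and leaves the existing tests untouched. A sketch, assuming the fixture is named myfixture and falling back to a hardcoded version list when no --dbupgrade options are given:

```python
# conftest.py (sketch)

def pytest_addoption(parser):
    parser.addoption(
        "--dbupgrade",
        action="append",
        default=[],
        help="list of base schema versions to upgrade"
    )


def pytest_generate_tests(metafunc):
    if "myfixture" in metafunc.fixturenames:
        versions = metafunc.config.getoption("dbupgrade") or ["v1", "v2"]
        # indirect=True routes each version through the fixture,
        # where it arrives as request.param
        metafunc.parametrize("myfixture", versions, indirect=True)
```

The fixture itself then reads request.param instead of being declared with params=..., which is the only change needed on the fixture side.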
I'm trying to write a workaround for the inability of pytest/xdist to run some tests in serial, rather than all tests in parallel.
In order to do what I'm trying to do, I need to get a list of all the collected parameterized tests (so they look something like path/to/test_module_name.py::TestClassName::test_method_name[parameterization info]). I'm attempting to do so in a session scoped fixture, but can't figure out where this info is stored. Is there a way to do this?
I noticed at one point, when calling pytest with --cache-show, that 'cache/nodeids' was being populated with the exact node id information I need, but I can't seem to figure out when that does/doesn't happen, as it isn't consistent.
While I couldn't find exactly what I was looking for, the problem with serializing tests while using the xdist plugin can be resolved with the following two fixtures:
import contextlib
import os
import pathlib

import filelock
import pytest


@pytest.fixture(scope='session')
def lock():
    lock_file = pathlib.Path('serial.lock')
    yield filelock.FileLock(lock_file=str(lock_file))
    with contextlib.suppress(OSError):
        os.remove(path=lock_file)


@pytest.fixture()  # Add this fixture to each test that needs to be serialized
def serial(lock):
    with lock.acquire(poll_intervall=0.1):
        yield
I'm trying to understand what conftest.py files are meant to be used for.
In my (currently small) test suite I have one conftest.py file at the project root. I use it to define the fixtures that I inject into my tests.
I have two questions:
Is this the correct use of conftest.py? Does it have other uses?
Can I have more than one conftest.py file? When would I want to do that? Examples will be appreciated.
More generally, how would you define the purpose and correct use of conftest.py file(s) in a pytest test suite?
Is this the correct use of conftest.py?
Yes it is. Fixtures are a common use of conftest.py. The fixtures you define there will be shared among all tests in your test suite. However, defining fixtures in the root conftest.py can be wasteful if they are only needed by a small subset of tests.
Does it have other uses?
Yes it does.
Fixtures: Define fixtures for static data used by tests. This data can be accessed by all tests in the suite unless specified otherwise. This can be plain data as well as helper objects that are passed to the tests.
External plugin loading: conftest.py is used to import external plugins or modules. By defining the following global variable, pytest will load the module and make it available to your tests. Plugins are generally files defined in your project or other modules that might be needed in your tests. You can also load a set of predefined plugins as explained here.
pytest_plugins = "someapp.someplugin"
Hooks: You can specify hooks such as setup and teardown methods and much more to improve your tests. For a set of available hooks, read Hooks link. Example:
def pytest_runtest_setup(item):
    """Called before ``pytest_runtest_call(item)``."""
    # do some stuff
Test root path: This is a bit of a hidden feature. By defining conftest.py in your root path, you will have pytest recognizing your application modules without specifying PYTHONPATH. In the background, py.test modifies your sys.path by including all submodules which are found from the root path.
Can I have more than one conftest.py file?
Yes you can and it is strongly recommended if your test structure is somewhat complex. conftest.py files have directory scope. Therefore, creating targeted fixtures and helpers is good practice.
When would I want to do that? Examples will be appreciated.
Several cases could fit:
Creating a set of tools or hooks for a particular group of tests.
root/mod/conftest.py
def pytest_runtest_setup(item):
    print("I am mod")
    # do some stuff
test root/mod2/test.py will NOT produce "I am mod"
Loading a set of fixtures for some tests but not for others.
root/mod/conftest.py
import pytest

@pytest.fixture()
def fixture():
    return "some stuff"
root/mod2/conftest.py
import pytest

@pytest.fixture()
def fixture():
    return "some other stuff"
root/mod2/test.py
def test(fixture):
    print(fixture)
Will print "some other stuff".
Extending hooks defined in the root conftest.py.
root/mod/conftest.py
def pytest_runtest_setup(item):
    print("I am mod")
    # do some stuff
root/conftest.py
def pytest_runtest_setup(item):
    print("I am root")
    # do some stuff
Running any test inside root/mod executes both implementations, so both "I am root" and "I am mod" are printed; hook implementations in nested conftest.py files add to, rather than replace, the ones in parent directories. Tests outside root/mod print only "I am root".
You can read more about conftest.py here.
EDIT:
What if I need plain-old helper functions to be called from a number
of tests in different modules - will they be available to me if I put
them in a conftest.py? Or should I simply put them in a helpers.py
module and import and use it in my test modules?
You can use conftest.py to define your helpers, but you should follow common practice: in pytest, helpers are usually exposed as fixtures. For example, in my tests I have a mock redis helper which I inject into my tests this way.
root/helper/redis/redis.py
import pytest

@pytest.fixture
def mock_redis():
    return MockRedis()
root/tests/stuff/conftest.py
pytest_plugins = "helper.redis.redis"
root/tests/stuff/test.py
def test(mock_redis):
    print(mock_redis.get('stuff'))
This makes redis.py a module that you can freely load into your tests. Note that you could potentially name redis.py as conftest.py if your redis directory contains more tests; however, that practice is discouraged because of ambiguity.
If you want to use conftest.py, you can simply put that helper in your root conftest.py and inject it when needed.
root/tests/conftest.py
import pytest

@pytest.fixture
def mock_redis():
    return MockRedis()
root/tests/stuff/test.py
def test(mock_redis):
    print(mock_redis.get('stuff'))
Another thing you can do is to write an installable plugin. In that case your helper can be written anywhere but it needs to define an entry point to be installed in your and other potential test frameworks. See this.
If you don't want to use fixtures, you could of course define a simple helper and just use the plain old import wherever it is needed.
root/tests/helper/redis.py
class MockRedis:
    # stuff
    pass
root/tests/stuff/test.py
from helper.redis import MockRedis

def test():
    print(MockRedis().get('stuff'))
However, here you might have problems with the import path, since the helper module is not in a child folder of the test. You should be able to overcome this (not tested) by adding an __init__.py to your helper package:
root/tests/helper/__init__.py
from .redis import MockRedis
Or simply adding the helper module to your PYTHONPATH.
In a broad sense, conftest.py is a local per-directory plugin: it is where you define directory-specific hooks and fixtures. In my case I have a root directory containing project-specific test directories. Some common magic is stationed in the 'root' conftest.py, and project-specific magic in each project's own. I see nothing wrong with storing fixtures in conftest.py as long as they are widely used (otherwise I prefer to define them directly in the test files).
I use the conftest.py file to define the fixtures that I inject into my tests, is this the correct use of conftest.py?
Yes, a fixture is usually used to get data ready for multiple tests.
Does it have other uses?
Yes, a fixture is a function that is run by pytest before, and sometimes
after, the actual test functions. The code in the fixture can do whatever you
want it to. For instance, a fixture can be used to get a data set for the tests to work on, or a fixture can also be used to get a system into a known state before running a test.
Can I have more than one conftest.py file? When would I want to do that?
First, it is possible to put fixtures into individual test files. However, to share fixtures among multiple test files, you need to use a conftest.py file somewhere centrally located for all of the tests. Fixtures can be shared by any test. They can be put in individual test files if you want the fixture to only be used by tests in that file.
Second, yes, you can have other conftest.py files in subdirectories of the top tests directory. If you do, fixtures defined in these lower-level conftest.py files will be available to tests in that directory and subdirectories.
Finally, putting fixtures in the conftest.py file at the test root will make them available in all test files.
Here are the official docs about using conftest.py to share fixtures:
conftest.py: sharing fixtures across multiple files
The conftest.py file serves as a means of providing fixtures for an entire directory. Fixtures defined in a conftest.py can be used by any test in that package without needing to import them (pytest will automatically discover them).
You can have multiple nested directories/packages containing your tests, and each directory can have its own conftest.py with its own fixtures, adding on to the ones provided by the conftest.py files in parent directories.