The baseline of all my tests is that there will always be a taxi with at least one passenger in it. I can easily achieve this setup with some basic fixtures:
import pytest

from blah import Passenger, Taxi

@pytest.fixture
def passenger():
    return Passenger()

@pytest.fixture
def taxi(passenger):
    return Taxi(rear_seat=passenger)
Testing the baseline is straightforward:
def test_taxi_contains_passenger(taxi):
    assert taxi.has_passenger()
My issue crops up when I start needing more complicated test setup. There will be scenarios where I'll need the taxi to have more than one passenger and scenarios where I'll need to define passenger attributes. For example:
def test_three_passengers_in_taxi(taxi):
    assert taxi.has_passengers(3)
    assert taxi.front_passenger_is_not_a_child()
I'm able to get around this problem by having specific fixtures for specific tests. For the above test, I would create the following fixture:
@pytest.fixture
def three_passenger_test_setup(taxi):
    taxi.add_front_seat_passenger(Passenger(child=False))
    taxi.add_rear_seat_passenger(Passenger())
    return taxi
I can pass the above fixture into my test case and everything is dandy, but if I go down this route I might end up with a fixture for every test and it feels like there should be a more efficient way of doing this.
Is there a way to pass arguments to a fixture so that those arguments can be used in creating the object the fixture returns? Should I be parameterizing the test function? The fixture? Or am I wasting time and is a fixture per test the way to go?
You can do this by defining an inner function that takes arguments inside the fixture, and returning that function from the fixture. Here is an example:
import pytest

@pytest.fixture
def my_fixture():
    def _method(a, b):
        return a * b
    return _method

def test_me(my_fixture):
    result1 = my_fixture(2, 3)
    assert result1 == 6
    result2 = my_fixture(4, 5)
    assert result2 == 20
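Applied to the taxi example, the same pattern might look like the following sketch. The factory name, parameters, and defaults here are illustrative, assuming the Taxi/Passenger API from the question:

import pytest

from blah import Passenger, Taxi

@pytest.fixture
def make_taxi():
    # Factory fixture: each call builds a fresh Taxi with the requested passengers.
    def _make_taxi(rear=1, front_child=None):
        taxi = Taxi(rear_seat=Passenger())
        for _ in range(rear - 1):
            taxi.add_rear_seat_passenger(Passenger())
        if front_child is not None:
            taxi.add_front_seat_passenger(Passenger(child=front_child))
        return taxi
    return _make_taxi

def test_three_passengers(make_taxi):
    taxi = make_taxi(rear=2, front_child=False)  # 2 rear + 1 front = 3 passengers
    assert taxi.has_passengers(3)
    assert taxi.front_passenger_is_not_a_child()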
Is there a way to pass arguments to a fixture so that those arguments can be used in creating the object the fixture returns?
Should I be parameterizing the test function?
You can use test parametrization with indirect=True. See the section "Apply indirect on particular arguments" in the pytest docs, and the example here: https://stackoverflow.com/a/33879151/3858507
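A minimal sketch of indirect parametrization, reusing the names from the question (the parameter values, and the assumption that Taxi accepts a list for rear_seat, are illustrative):

import pytest

from blah import Passenger, Taxi

@pytest.fixture
def taxi(request):
    # With indirect=True, the parametrized value arrives here as request.param.
    return Taxi(rear_seat=[Passenger()] * request.param)

@pytest.mark.parametrize("taxi", [1, 3], indirect=True)
def test_taxi_contains_passenger(taxi):
    assert taxi.has_passenger()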
The fixture?
Another option that might suit you is using some fixture that specifies the argument using parametrization:
@pytest.fixture(params=[3, 4])
def number_of_passengers(request):
    return request.param
and then accessing this fixture from the taxi and the test itself:
@pytest.fixture
def taxi(number_of_passengers):
    return Taxi(rear_seat=[Passenger()] * number_of_passengers)

def test_three_passengers_in_taxi(taxi, number_of_passengers):
    assert taxi.has_passengers(number_of_passengers)
    assert taxi.front_passenger_is_not_a_child()
This approach works well when your tests and assertions are very similar across the cases you have.
Or am I wasting time and is a fixture per test the way to go?
I'd say you definitely shouldn't create a fixture for every test function. For that, you can just put the setup inside the test itself. This is a viable alternative when you have to make different assertions for different taxi configurations.
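For example, the three-passenger scenario could be set up inline, reusing the API from the question:

def test_three_passengers_in_taxi():
    taxi = Taxi(rear_seat=Passenger())
    taxi.add_front_seat_passenger(Passenger(child=False))
    taxi.add_rear_seat_passenger(Passenger())
    assert taxi.has_passengers(3)
    assert taxi.front_passenger_is_not_a_child()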
And finally, another possible pattern is a taxi factory. While it's not especially useful for the example you've presented, if multiple parameters are required and only some of them change, you can create a fixture similar to the following:
import pytest
from functools import partial

@pytest.fixture
def taxi_factory():
    return partial(Taxi, 1, 2, 3)
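A hypothetical use of that factory, assuming Taxi took three leading positional parameters (pre-filled by the partial) and still accepted rear_seat as a keyword:

def test_with_prefilled_taxi(taxi_factory):
    # Only the arguments that vary need to be supplied here.
    taxi = taxi_factory(rear_seat=Passenger())
    assert taxi.has_passenger()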
A pytest fixture is just a Python decorator.
@decorator
def function(args):
    ...

is shorthand for

def function(args):
    ...
function = decorator(function)
So you might be able to write your own decorator that wraps the function you want to decorate in whatever you need, plus the fixture:
import pytest

def myFixture(parameter):
    def wrapper(function):
        def wrapped(*args, **kwargs):
            return function(parameter, *args, **kwargs)
        return wrapped
    return pytest.fixture(wrapper)

@myFixture('foo')
def function(parameter, ...):
    ...
This will act like the fixture but will pass a value ('foo') as parameter to function.
TL;DR: use pytest.mark and the request fixture to access request.keywords.
This is a very old question, but existing answers did not work for me, so here is my solution using pytest.mark
import pytest

from blah import Passenger, Taxi

@pytest.fixture
def passenger():
    return Passenger()

@pytest.fixture
def taxi(passenger, request):
    if "taxi" in request.keywords:
        kwargs = request.keywords["taxi"].kwargs
    else:
        kwargs = dict(rear_seat=passenger)
    return Taxi(**kwargs)
# This allows testing the baseline as-is...
def test_taxi_contains_passenger(taxi):
    assert taxi.has_passenger()

# ...and also using pytest.mark to pass whatever kwargs:
@pytest.mark.taxi(rear_seat=[Passenger()] * 3)
def test_three_passengers_in_taxi(taxi):
    assert taxi.has_passengers(3)
    assert taxi.front_passenger_is_not_a_child()
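One caveat not in the original answer: recent pytest versions warn about unknown marks, so you may want to register the custom taxi marker, for example in conftest.py:

# conftest.py
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "taxi: pass keyword arguments through to the taxi fixture"
    )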
Related
Let's say I have a very simple logging decorator:
from functools import wraps

def my_decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        print(f"{func.__name__} ran with args: {args}, and kwargs: {kwargs}")
        result = func(*args, **kwargs)
        return result
    return wrapper
I can add this decorator to every pytest unit test individually:
@my_decorator
def test_one():
    assert True

@my_decorator
def test_two():
    assert 1
How can I automatically add this decorator to every single pytest unit test so I don't have to add it manually? What if I want to add it to every unit test in a file? Or in a module?
My use case is to wrap every test function with a SQL profiler, so inefficient ORM code raises an error. Using a pytest fixture should work, but I have thousands of tests so it would be nice to apply the wrapper automatically instead of adding the fixture to every single test. Additionally, there may be a module or two I don't want to profile so being able to opt-in or opt-out an entire file or module would be helpful.
Provided you can move the logic into a fixture, as stated in the question, you can just use an auto-use fixture defined in the top-level conftest.py.
To add the possibility to opt out for some tests, you can define a marker that will be added to the tests that should not use the fixture, and check that marker in the fixture, e.g. something like this:
conftest.py
import pytest

def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "no_profiling: mark test to not use sql profiling"
    )

@pytest.fixture(autouse=True)
def sql_profiling(request):
    if not request.node.get_closest_marker("no_profiling"):
        # start the profiling, then hand control to the test
        yield
        # stop the profiling and check the results here
    else:
        # marked tests skip profiling, but the fixture must still yield
        yield
test.py
import pytest

def test1():
    pass  # will use profiling

@pytest.mark.no_profiling
def test2():
    pass  # will not use profiling
As pointed out by @hoefling, you could also disable the fixture for a whole module by adding:
pytestmark = pytest.mark.no_profiling
in the module. That will add the marker to all contained tests.
I read somewhere on SO that you should not test the decorator but the functionality of the wrapped function. Nevertheless there might be a shorter way of testing whether a certain decorator is assigned to multiple functions.
I have this decorator:
from functools import wraps

def check_user(func):
    """Only allow admins to change the user_id in the annotated function.

    Use as decorator: @check_user
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        ...
I have some tests to test the decorator function itself, e.g.:
def test_check_user(self):
    """Test the check user decorator if a non-admin wants to overwrite the user."""
    with pytest.raises(ValueError) as ex:
        @check_user
        def foo(login: str, userId: str):
            return True

        foo(login="Foo", userId="Bar")
    assert ex.value.args[0] == "Only admin users may overwrite the userId."
Now I have about 20 FastAPI endpoints where I assigned this decorator.
I want to avoid to repeat the same tests (see example above and other tests) for each function.
So something like this would be great:
#pytest.mark.parametrize("method", ["foo", "bar", "gaga", ....])
def test_decorated_methods(self, method):
assert assigned(check_user, method) # or so
You should be able to parametrize test_check_user to check the same assert for every decorated function in one test. This is superior to just checking if a decorator is applied because it validates the actual functional requirement of preventing non-admins from changing userId.
Remember the goal of writing good tests is to protect you from your future self. While you currently feel you can infer a security feature from the presence of this decorator, can you be sure that this inference will always be true for the rest of the project's lifetime? Better to make sure that your security features actually behave as intended.
#pytest.mark.parametrize("method,args,kwargs", [(my_func, ["Foo"], {}),
(my_other_func, ["bar"], {})])
def test_cant_change_id_if_not_admin(func, args, kwargs):
kwargs["userId"]="Bar"
with pytest.raises(ValueError) as ex:
func(*args, **kwargs)
assert ex.value.args[0] == "Only admin users may overwrite the userId."
Similar questions were asked before:
How to skip or ignore python decorators
Bypassing a decorator for unit testing
functools.wraps sets a __wrapped__ attribute on the wrapper function, which gives access to the wrapped function. When you have multiple decorators, though, __wrapped__ returns the next inner wrapper, not the original function.
How to avoid or bypass multiple decorators?
To bypass or avoid multiple decorators and access the innermost function, use this recursive helper:
def unwrap(func):
    if not hasattr(func, '__wrapped__'):
        return func
    return unwrap(func.__wrapped__)
In pytest, you can keep this function in conftest.py as a fixture and access it throughout your tests:
# conftest.py
import pytest

@pytest.fixture
def unwrap():
    def unwrapper(func):
        if not hasattr(func, '__wrapped__'):
            return func
        return unwrapper(func.__wrapped__)
    yield unwrapper
# my_unit_test.py
from my_module import decorated_function

def test_my_function(unwrap):
    decorated_function_unwrapped = unwrap(decorated_function)
    assert decorated_function_unwrapped() == 'something'
Like this:

@pytest.fixture
def myfixture():
    # creates a complex object
    ...

def test_compare(myfixture, myfixture):
    # compares
    ...
Is there a way to know which fixture instance I am working on? The generated objects are different. Thanks.
You are looking for the factory pattern: https://docs.pytest.org/en/latest/fixture.html#factories-as-fixtures
Here is an SO question about the advantages of fixture factories: Why would a pytest factory as fixture be used over a factory function?
Answering your question:

import pytest

@pytest.fixture
def myfixture():
    def _make_fixture():
        # creates a complex object
        ...
    return _make_fixture

def test_compare(myfixture):
    data1 = myfixture()
    data2 = myfixture()
    # compares
Why do you want to compare the objects returned by a fixture? Fixtures are not meant for that.
As per the documentation:
The purpose of test fixtures is to provide a fixed baseline upon which tests can reliably and repeatedly execute.
If you want to compare the returned objects, just make it a plain function rather than a fixture, like:
def myfixture():
    # creates a complex object
    ...

def test_compare():
    a = myfixture()
    b = myfixture()
    # compare a and b
Is it possible to prevent the execution of "function scoped" fixtures with autouse=True on specific marks only?
I have the following fixture set to autouse so that all outgoing requests are automatically mocked out:
from unittest.mock import MagicMock

import pytest

@pytest.fixture(autouse=True)
def no_requests(monkeypatch):
    monkeypatch.setattr("requests.sessions.Session.request", MagicMock())
But I have a mark called endtoend that I use to define a series of tests that are allowed to make external requests for more robust end to end testing. I would like to inject no_requests in all tests (the vast majority), but not in tests like the following:
@pytest.mark.endtoend
def test_api_returns_ok():
    assert make_request().status_code == 200
Is this possible?
You can also use the request object in your fixture to check the markers used on the test, and don't do anything if a specific marker is set:
import pytest

@pytest.fixture(autouse=True)
def autofixt(request):
    if 'noautofixt' in request.keywords:
        return
    print("patching stuff")

def test1():
    pass

@pytest.mark.noautofixt
def test2():
    pass
Output with -vs:
x.py::test1 patching stuff
PASSED
x.py::test2 PASSED
In case you have your endtoend tests in specific modules or classes, you could also just override the no_requests fixture locally, for example assuming you group all your integration tests in a file called test_end_to_end.py:
# test_end_to_end.py
import pytest

@pytest.fixture(autouse=True)
def no_requests():
    return

def test_api_returns_ok():
    # Should make a real request.
    assert make_request().status_code == 200
I wasn't able to find a way to disable fixtures with autouse=True, but I did find a way to revert the changes made in my no_requests fixture. monkeypatch has a method undo that reverts all patches made on the stack, so I was able to call it in my endtoend tests like so:
@pytest.mark.endtoend
def test_api_returns_ok(monkeypatch):
    monkeypatch.undo()
    assert make_request().status_code == 200
It would be difficult, and probably not possible, to cancel or change the autouse fixture
You can't cancel an autouse fixture, since it's autouse. Maybe you could change the autouse fixture's behavior based on a mark's condition, but that would be hackish and difficult.
Possibly with:
import pytest
from _pytest.mark import MarkInfo
I couldn't find a way to do this, but maybe the @pytest.fixture(autouse=True) could get the MarkInfo, and if it came back 'endtoend' the fixture wouldn't set the attribute. But you would also have to set a condition in the fixture parameters, i.e. something like @pytest.fixture(True=MarkInfo, autouse=True). I couldn't find a way to make that work.
It's recommended that you organize tests to prevent this
You could just separate the no_requests fixture from the endtoend tests by either:
limiting the scope of your autouse fixture,
putting no_requests into a class, or
not making it autouse, and instead passing it as a parameter to each test that needs it.
Like so:
class NoRequests:
    @pytest.fixture(scope='module', autouse=True)
    def no_requests(self, monkeypatch):
        monkeypatch.setattr("requests.sessions.Session.request", MagicMock())

    def test_no_request1(self):
        # do stuff here
        ...

    # and so on
This is good practice, and a different organization might help. But in your case, it's probably easiest to use monkeypatch.undo().