I am going through pytest fixtures, and the following two look pretty similar and behave pretty similarly.
Yes, the readability is better with yield_fixture, but could someone tell me what exactly the difference is?
Which should I use in cases like the one below?
@pytest.fixture()
def open_browser(request):
print("Browser opened")
def close_browser():
print("browser closed")
request.addfinalizer(close_browser)
return "browser object"
@pytest.yield_fixture()
def open_browser():
print("Browser opened")
yield "browser object"
print("browser closed")
def test_google_search(open_browser):
print(open_browser)
print("test_google_search")
The only difference is in readability. I think (though I'm not 100% sure) the underlying behavior is identical (i.e. the cleanup after the yield statement is run as a finalizer). I always prefer using yield fixtures for cleanup, since it's more readable.
If you're using pytest <3.0, you'll still need to use pytest.yield_fixture to get that behavior. But if you're able to use pytest 3.0+, pytest.yield_fixture is deprecated and you can use pytest.fixture to get the same yield_fixture behavior.
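In other words, on pytest 3.0+ the yield version from the question works as-is with the plain decorator (same code as above, only the decorator changes):

import pytest

@pytest.fixture()
def open_browser():
    print("Browser opened")   # setup
    yield "browser object"    # the value handed to the test
    print("browser closed")   # teardown, run as a finalizer after the test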
Here are the explanatory docs:
Since pytest-3.0, fixtures using the normal fixture decorator can use
a yield statement to provide fixture values and execute teardown code,
exactly like yield_fixture in previous versions.
Marking functions as yield_fixture is still supported, but deprecated
and should not be used in new code.
addfinalizer has two key differences from yield:
It is possible to register multiple finalizer functions.
Finalizers will always be called regardless if the fixture setup
code raises an exception. This is handy to properly close all
resources created by a fixture even if one of them fails to be
created/acquired
From the pytest docs
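To illustrate the second point, a minimal sketch (acquire_resource_a/acquire_resource_b are hypothetical helpers standing in for things like a browser and a server):

import pytest

@pytest.fixture()
def resources(request):
    a = acquire_resource_a()          # hypothetical helper
    request.addfinalizer(a.release)   # registered before b is acquired
    b = acquire_resource_b()          # hypothetical; if this raises,
    request.addfinalizer(b.release)   # a.release() still runs
    return a, b

With a yield fixture, an exception raised between the two setup steps would skip the teardown entirely, because the code after yield is never reached.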
I have a test framework that is performing multiple asserts and catching them (inherited someone else's code). The proprietary results report is correct, but as you may have guessed, pytest will mark these as passed:
test_something.py::TestSomething::test_that_should_fail PASSED
I've added an autouse fixture as such:
@pytest.fixture(autouse=True)
def run_after_tests(self):
yield
if self.actually_failed():
pytest.fail("Yay, failing when failures occur is cool!")
This solution works okay, except that it seems like the cleanup happens after the test has already been marked as PASSED, and a duplicate test is shown with an error.
Now pytest results look like this:
test_something.py::TestSomething::test_that_should_fail PASSED
test_something.py::TestSomething::test_that_should_fail ERROR
Is there a way to delay the evaluation of the test so it doesn't say it has passed?
I know this is a really stupid way of doing things and performing test evaluation at cleanup is not recommended, but there are far too many tests that have been written this way, and spending weeks refactoring them is not feasible.
An alternative solution I've thought of is to write a decorator and then sed all the test cases and add it to the functions; but this is going to be my plan B if fixtures can't solve this.
Thanks!
As you've discovered, a fixture is a separate object from the test itself.
You'll need to modify the appropriate pytest hook. I haven't personally tested it, but I believe placing the following code into your project's conftest.py will give you your desired result.
import pytest

def check_for_failure(outcome) -> bool:
    # define me
    ...

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    # With hookwrapper=True, the yield hands back pluggy's outcome object
    # for the test call (it exposes outcome.excinfo and outcome.get_result()).
    outcome = yield
    if check_for_failure(outcome):
        pytest.fail()
I am running tests in two modes: with bare pytest and with pytest-xdist.
I have a heavy fixture that was defined with module scope. Inside this fixture, I have some optimization for the case when I am running tests with xdist:
@pytest.fixture(scope="module")
def myfixture(request):
if running_with_pytest:
pass
else:
pass
It works fine, but I also want to change the fixture's scope to session (if I run tests with xdist).
How can I do this?
I gave command line options a really good try, but it seems that the fixture's scope parameter only accepts a string value and can't take a callable:
https://github.com/pytest-dev/pytest/issues/1682
I guess we just need to wait until the issue is resolved.
But if you figure this one out, please share your solution.
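Update: since pytest 5.2 the scope argument can also be a callable, which resolves the linked issue. A minimal sketch; detecting xdist through its numprocesses option is an assumption, so adapt the check to however you distinguish the two modes:

import pytest

def myfixture_scope(fixture_name, config):
    # Assumption: pytest-xdist is installed and registers -n/--numprocesses;
    # a non-empty value means the run is distributed.
    if config.getoption("numprocesses", default=None):
        return "session"
    return "module"

@pytest.fixture(scope=myfixture_scope)
def myfixture(request):
    ...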
How can I find out from the current test if it's the last to be run? (Python unittest / nosetests)
I have some specific fixture teardown to be done at the very end of the test run and it would be a lot easier if on a test by test basis I could just determine:
if last_test:
hard_fixture_teardown()
else:
soft_fixture_teardown()
I have a package teardown which would work perfectly, but it seems very messy passing the fixture information back to __init__.teardown_package().
You can use a combination of TestCase.tearDown() and TestCase.tearDownClass() to achieve this. tearDown() is called for each test method while tearDownClass() is called after all tests in the class have run.
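For example, reusing soft_fixture_teardown()/hard_fixture_teardown() from the question (a runnable sketch with stub implementations; note that tearDownClass covers "after the last test in this class", not "after the whole run"):

import unittest

def soft_fixture_teardown():
    print("soft teardown")

def hard_fixture_teardown():
    print("hard teardown")

class MyTests(unittest.TestCase):
    def tearDown(self):
        # runs after each individual test method
        soft_fixture_teardown()

    @classmethod
    def tearDownClass(cls):
        # runs once, after the last test method in this class
        hard_fixture_teardown()

    def test_one(self):
        self.assertTrue(True)

    def test_two(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()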
As stated here, unit tests are not meant to have an order; unit tests depending on the order are either not well conceived or just have to be merged into a monolithic one. (Merging separate functions into a single test is the accepted answer.)
[edit after comment]
If the order is not important you can do this (quite messy; imo we are still pushing the boundaries of how unit tests should be used).
in every test package you put:
import os

def tearDownModule():
    xxx = int(os.getenv('XXX', '0')) + 1
    if xxx == NUMBER_OF_TEST_PACKAGES:
        print("hard tear down")
    else:
        print("not yet")
    os.environ['XXX'] = str(xxx)
with NUMBER_OF_TEST_PACKAGES imported from somewhere global.
Also, if the order is not important, I suppose that when used the fixture is only read and not modified; if so, you can set it up once as a class method:
@classmethod
def setUpClass(cls):
    print("I'll take a lot of time here")
I would like to handle a NameError exception by injecting the desired missing variable into the frame and then continuing the execution from the last attempted instruction.
The following pseudo-code should illustrate my needs.
import inspect

def function():
    return missing_var

try:
    print(function())
except NameError:
    frame = inspect.trace()[-1][0]
    # inject missing variable
    frame.f_globals["missing_var"] = ...
    # continue frame execution from the last attempted instruction
    # (pseudo-code: no such construct exists in Python)
    exec frame.f_code from frame.f_lasti
Read the whole unittest on repl.it
Notes
As pointed out by ivan_pozdeev in his answer, this is known as resumption.
After more research, I found Veedrac's answer to lc2817's question "Resuming program at line number in the context before an exception using a custom sys.excepthook" very interesting. It relies on Richie Hindle's work.
Background
The code runs in a slave process, which is controlled by a parent. Tasks (functions, really) are written in the parent and later passed to the slave using dill. I expect some tasks (running in the slave process) to try to access variables from outer scopes in the parent, and I'd like the slave to request those variables from the parent on the fly.
p.s.: I don't expect this magic to run in a production environment.
Contrary to what various commenters are saying, "resume-on-error" exception handling is possible in Python. The library fuckit.py implements said strategy. It steamrollers errors by rewriting the source code of your module at import time, inserting try...except blocks around every statement and swallowing all exceptions. So perhaps you could try a similar sort of tactic?
It goes without saying: that library is intended as a joke. Don't ever use it in production code.
You mentioned that your use case is to trap references to missing names. Have you thought about using metaprogramming to run your code in the context of a "smart" namespace such as a defaultdict? (This is perhaps only marginally less of a bad idea than fuckit.py.)
from collections import defaultdict
class NoMissingNamesMeta(type):
@classmethod
def __prepare__(meta, name, bases):
return defaultdict(lambda: "foo")
class MyClass(metaclass=NoMissingNamesMeta):
x = y + "bar" # y doesn't exist
>>> MyClass.x
'foobar'
NoMissingNamesMeta is a metaclass - a language construct for customising the behaviour of the class statement. Here we're using the __prepare__ method to customise the dictionary which will be used as the class's namespace during creation of the class. Thus, because we're using a defaultdict instead of a regular dictionary, a class whose metaclass is NoMissingNamesMeta will never get a NameError. Any names referred to during the creation of the class will be auto-initialised to "foo".
This approach is similar to @AndréFratelli's idea of manually requesting the lazily-initialised data from a Scope object. In production I'd do that, not this. The metaclass version requires less typing to write the client code, but at the expense of a lot more magic. (Imagine yourself debugging this code in two years, trying to understand why non-existent variables are dynamically being brought into scope!)
The "resumption" exception handling technique has proven to be problematic, that's why it's missing from C++ and later languages.
Your best bet is to use a while loop to not resume where the exception was thrown but rather repeat from a predetermined place:
while True:
try:
do_something()
except NameError as e:
handle_error()
else:
break
You really can't unwind the stack after an exception is thrown, so you'd have to deal with the issue beforehand. If your requirement is to generate these variables on the fly (which wouldn't be recommended, but you seem to understand that), then you'd have to actually request them. You can implement a mechanism for that (such as having a global custom Scope class instance and overriding __getitem__, or using something like the __dir__ function), but not as you are asking for it. A sketch of the Scope idea follows.
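A minimal sketch of that custom Scope idea; the fetch callback is hypothetical and stands in for whatever slave-to-parent request mechanism you have:

class Scope:
    """Namespace that requests unknown names on demand instead of failing."""
    def __init__(self, fetch):
        self._fetch = fetch    # hypothetical callback, e.g. asks the parent
        self._values = {}

    def __getitem__(self, name):
        if name not in self._values:
            self._values[name] = self._fetch(name)  # request on the fly
        return self._values[name]

# Demo usage: task code looks names up through the scope explicitly
# instead of relying on bare names.
scope = Scope(fetch=lambda name: "value of " + name)
print(scope["missing_var"])  # -> "value of missing_var"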
I want to make sure I understand how to create tasklets and asynchronous methods. What I have is a method that returns a list. I want it to be called from somewhere, and immediately allow other calls to be made. So I have this:
future_1 = get_updates_for_user(userKey, aDate)
future_2 = get_updates_for_user(anotherUserKey, aDate)
# both tasklets are now in flight; get_result() blocks on each in turn
somelist.extend(future_1.get_result())
somelist.extend(future_2.get_result())
....
@ndb.tasklet
def get_updates_for_user(userKey, lastSyncDate):
noteQuery = ndb.GqlQuery('SELECT * FROM Comments WHERE ANCESTOR IS :1 AND modifiedDate > :2', userKey, lastSyncDate)
note_list = list()
qit = noteQuery.iter()
while (yield qit.has_next_async()):
note = qit.next()
noteDic = note.to_dict()
note_list.append(noteDic)
raise ndb.Return(note_list)
Is this code doing what I'd expect it to do? Namely, will the two calls run asynchronously? Am I using futures correctly?
Edit: Well, after testing, the code does produce the desired results. I'm a newbie to Python - what are some ways to test whether the methods are running async?
It's pretty hard to verify for yourself that the methods are running concurrently -- you'd have to put copious logging in. Also in the dev appserver it'll be even harder as it doesn't really run RPCs in parallel.
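If you do want to try, the simplest probe is timestamped logging at the start and end of the tasklet; if the "start" lines for both users appear before either "end" line, the calls overlapped. A sketch based on the question's own loop (interleaving suggests, but doesn't prove, concurrency):

import logging
import time

from google.appengine.ext import ndb

@ndb.tasklet
def get_updates_for_user_async(userKey, lastSyncDate):
    logging.info("start %s %.3f", userKey, time.time())
    noteQuery = ndb.GqlQuery(
        'SELECT * FROM Comments WHERE ANCESTOR IS :1 AND modifiedDate > :2',
        userKey, lastSyncDate)
    note_list = []
    qit = noteQuery.iter()
    while (yield qit.has_next_async()):
        note_list.append(qit.next().to_dict())
    logging.info("end %s %.3f", userKey, time.time())
    raise ndb.Return(note_list)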
Your code looks okay, it uses yield in the right place.
My only recommendation is to name your function get_updates_for_user_async() -- that matches the convention NDB itself uses and is a hint to the reader of your code that the function returns a Future and should be yielded to get the actual result.
An alternative way to do this is to use the map_async() method on the Query object; it would let you write a callback that just contains the to_dict() call:
@ndb.tasklet
def get_updates_for_user_async(userKey, lastSyncDate):
noteQuery = ndb.gql('...')
note_list = yield noteQuery.map_async(lambda note: note.to_dict())
raise ndb.Return(note_list)
Advanced tip: you can simplify this even more by dropping the @ndb.tasklet decorator and just returning the Future returned by map_async():
def get_updates_for_user_async(userKey, lastSyncDate):
noteQuery = ndb.gql('...')
return noteQuery.map_async(lambda note: note.to_dict())
This is a general slight optimization for async functions that contain only one yield and immediately return the yielded value. (If you don't immediately get this, you're in good company, and it runs the risk of being broken by a future maintainer who doesn't get it either. :-)