Reset MongoEngine properly between tests - python

I am wondering how to reset the MongoEngine state properly between tests or test files. It seems like dropping the entire database isn't enough. Given the following example:
import unittest
from mypackage.models.domain.user import User
from mongoengine import connect

class UsersTests(unittest.TestCase):
    def setUp(self):
        self.conn = connect('test_database')

    def tearDown(self):
        self.conn.drop_database('test_database')
        # User.drop_collection()  # Uncomment to get the expected results

    def test_something(self):
        print User._collection
        User(name='John H.', email='john@somedomain.com').save()

    def test_something_else(self):
        print User._collection
        User(name='John H.', email='john@somedomain.com').save()

if __name__ == '__main__':
    unittest.main()
If I run it with nose (or nose2):
$ nosetests --nocapture tests/models/user_tests.py
None
.Collection(Database(MongoClient(None, None), u'test_database'), u'users')
.
----------------------------------------------------------------------
Ran 2 tests in 0.685s
OK
Even if I run it multiple times, only the first User collection is set to None:
$ nosetests --nocapture tests/models/user_tests.py tests/models/user_tests.py
None
.Collection(Database(MongoClient(None, None), u'test_database'), u'users')
.Collection(Database(MongoClient(None, None), u'test_database'), u'users')
.Collection(Database(MongoClient(None, None), u'test_database'), u'users')
.
----------------------------------------------------------------------
Ran 4 tests in 1.382s
OK
This behavior causes some hard-to-find bugs. After looking at the source, it looks like the Document class's drop_collection() is what I need for resetting the collection, but it's kind of weird to have to call it for all the collections in order to reset the MongoEngine state. I'm wondering what the recommended way is. Any ideas would be super appreciated! Thanks!
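For what it's worth, a minimal sketch of the approach the question itself hints at (uncommenting User.drop_collection()): make an explicit drop_collection() call in tearDown for every registered Document model, then drop the database.

import unittest
from mongoengine import connect
from mypackage.models.domain.user import User

class UsersTests(unittest.TestCase):
    def setUp(self):
        self.conn = connect('test_database')

    def tearDown(self):
        # Dropping the collection resets MongoEngine's cached collection
        # handle as well as the data; repeat for every Document subclass
        # the tests touch, then drop the database itself.
        User.drop_collection()
        self.conn.drop_database('test_database')

if __name__ == '__main__':
    unittest.main()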

Related

In pytest, how can I abort the fixture teardown?

Our pytest environment has a lot of fixtures (mostly scope='function' and scope='module') that are doing something of the form:
@pytest.yield_fixture(scope='function')
def some_fixture():
    ... some object initialization ...
    yield some_object
    ... teardown ...
We use the teardown phase of the fixture (after the yield) to delete some resources created specifically for the test.
However, if a test fails, I don't want the teardown to execute, so that the resources still exist for further debugging.
For example, here is a common scenario that repeats in all of our testing framework:
@pytest.yield_fixture(scope='function')
def obj_fixture():
    obj = SomeObj.create()
    yield obj
    obj.delete()

def test_obj_some_field(obj_fixture):
    assert obj_fixture.some_field is True
In this case, if the assert condition is True, I want obj.delete() to execute. However, if the test fails, I want pytest to skip obj.delete() and anything else after the yield.
Thank you.
EDIT
I want this to happen without altering the fixture and test code; I'd prefer an automatic mechanism to refactoring our whole testing codebase.
There's an example in the pytest docs of how to do this. The basic idea is that you capture this information in a hook function and attach it to the test item; the item is available on the test request, which fixtures and tests can reach via the request fixture.
For you, it would look something like this:
# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", or "teardown"
    setattr(item, "rep_" + rep.when, rep)
# test_obj.py
import pytest

@pytest.fixture()
def obj(request):
    obj = 'obj'
    yield obj
    # setup succeeded, but the test itself ("call") failed
    if request.node.rep_setup.passed and request.node.rep_call.failed:
        print(' dont kill obj here')
    else:
        print(' kill obj here')

def test_obj(obj):
    assert obj == 'obj'
    assert False  # force the test to fail
If you run this with pytest -s (to not let pytest capture output from fixtures), you'll see output like
foobar.py::test_obj FAILED dont kill obj here
which indicates that we're hitting the right branch of the conditional.
Teardown is intended to execute regardless of whether a test passed or failed.
So I suggest either writing your teardown code so that it is robust enough to run whether the test passed or failed, or adding the cleanup to the end of your test, so that it is only reached if no preceding assert failed and no exception occurred before it.
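A minimal sketch of the second suggestion (cleanup at the end of the test body), reusing the SomeObj placeholder from the question:

def test_obj_some_field():
    obj = SomeObj.create()
    assert obj.some_field is True
    # Only reached if every assert above passed and no exception was raised,
    # so the resource is left in place for debugging whenever the test fails.
    obj.delete()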
Set a module-level flag to indicate pass/fail and check it in the fixture's teardown. This is untested, but should give you the idea:
_passed = False

@pytest.yield_fixture(scope='function')
def obj_fixture():
    global _passed
    _passed = False
    obj = SomeObj.create()
    yield obj
    if _passed:
        obj.delete()

def test_obj_some_field(obj_fixture):
    global _passed
    assert obj_fixture.some_field is True
    _passed = True
I use a Makefile to run pytest, so I had an additional tool at my disposal. I also needed fixture cleanup to happen only on success, so I added the cleanup as a second command of the test target in my Makefile.
clean:
	find . | grep -E "(__pycache__|\.pyc|\.pyo)" | xargs rm -rf
	-rm database.db  # the minus here allows this to fail quietly

database:
	python -m create_database

lint:
	black .
	flake8 .

test: clean lint database
	pytest -x -p no:warnings
	rm -rf tests/mock/fixture_dir

How to use unittest.TestSuite in VS Code?

In the future, I'll need to add many identical tests with different parameters. For now, I'm making a sample test suite:
import unittest

class TestCase(unittest.TestCase):
    def __init__(self, methodName='runTest', param=None):
        super(TestCase, self).__init__(methodName)
        self.param = param

    def test_something(self):
        print '\n>>>>>> test_something: param =', self.param
        self.assertEqual(1, 1)

if __name__ == "__main__":
    suite = unittest.TestSuite()
    testloader = unittest.TestLoader()
    testnames = testloader.getTestCaseNames(TestCase)
    for name in testnames:
        suite.addTest(TestCase(name, param=42))
    unittest.TextTestRunner(verbosity=2).run(suite)
It gets discovered by VS Code:
start
test.test_navigator.TestCase.test_something
When I run the tests, I don't receive the parameter:
test_something (test.test_navigator.TestCase) ...
>>>>>> test_something: param = None
ok
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
If I run this file directly, everything works as expected (note the param = 42 part):
test_something (__main__.TestCase) ...
>>>>>> test_something: param = 42
ok
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
So it looks like VS Code is running the tests on its own just by using the discovered classes and ignoring TestSuite completely?
What am I doing wrong?
Thanks.
The problem is that your code is inside an if __name__ == "__main__" block, which only executes when you point Python directly at the file. When the extension asks unittest to collect the tests and then run them, nothing in that block runs (which is why the test can be discovered but nothing magical happens).
If you can get it working through unittest's command-line interface, the extension should run it the way you want.
The key is to implement the load_tests function:
def load_tests(loader, tests, pattern):
    suite = unittest.TestSuite()
    testnames = loader.getTestCaseNames(TestCase)
    for name in testnames:
        suite.addTest(TestCase(name, param=42))
        suite.addTest(TestCase(name, param=84))
    return suite
The documentation says:
If load_tests exists then discovery does not recurse into the package, load_tests is responsible for loading all tests in the package.
Now my tests run as expected.
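For reference, a minimal self-contained version of the test module with load_tests defined at module level, so both VS Code's discovery and python -m unittest pick it up (the param values are the same illustrative ones as above, and the debug print is omitted):

import unittest

class TestCase(unittest.TestCase):
    def __init__(self, methodName='runTest', param=None):
        super(TestCase, self).__init__(methodName)
        self.param = param

    def test_something(self):
        self.assertEqual(1, 1)

def load_tests(loader, tests, pattern):
    # unittest's discovery machinery calls this instead of the default loader.
    suite = unittest.TestSuite()
    for name in loader.getTestCaseNames(TestCase):
        suite.addTest(TestCase(name, param=42))
        suite.addTest(TestCase(name, param=84))
    return suite

if __name__ == "__main__":
    unittest.main()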
P.S. Thanks to Brett Cannon for pointing me to the unit testing framework documentation.

Mocks not getting hit in Python unit tests

I'm new to Python, but I've done quite a bit of unit testing in C# and JavaScript. I'm having trouble figuring out the mocking framework in Python. Here's what I have (trimmed down):
invoice_business.py
import ims.repository.invoice_repository as invoiceRepository
import logging

logger = logging.getLogger(__name__)

def update_invoice_statuses(invoices):
    for invoice in invoices:
        dbInvoice = invoiceRepository.get(invoice.invoice_id)
        print("dbInvoice is %s" % dbInvoice)  # prints <MagicMock etc.>
        if dbInvoice is None:
            logger.error("Unable to update status for invoice %d" % invoice.invoice_id)
            continue
test_invoice_business.py
from unittest import TestCase, mock
import logging
import ims.business.invoice_business as business

class UpdateInvoiceTests(TestCase):
    @mock.patch("ims.business.invoice_business.invoiceRepository")
    @mock.patch("ims.business.invoice_business.logger")
    def test_invoiceDoesNotExist_logsErrorAndContinues(self, invoiceRepoMock, loggerMock):
        # Arrange
        invoice = Invoice(123)
        invoice.set_status(InvoiceStatus.Filed, None)
        invoiceRepoMock.get.return_value(33)

        # Act
        business.update_invoice_statuses([invoice])

        # Assert
        invoiceRepoMock.get.assert_called_once_with(123)
        loggerMock.error.assert_called_once_with("Unable to update status for invoice 123")
The test fails with
AssertionError: Expected 'get' to be called once. Called 0 times.
The print statement in update_invoice_statuses gets hit, though, because I see the output of
dbInvoice is <MagicMock name='invoiceRepository.get()' id='xxxx'>
Any idea what I'm doing wrong here?
Edit: After @chepner's help, I ran into another assertion error and realized it was because I should be using invoiceRepoMock.get.return_value = None rather than .return_value(None).
The mock arguments to your test function are swapped. The inner decorator (for the logger) is applied first, so the mock logger should be the first argument to your method.
#mock.patch("ims.business.invoice_business.invoiceRepository")
#mock.patch("ims.business.invoice_business.logger")
def test_invoiceDoesNotExist_logsErrorAndContinues(self, loggerMock, invoiceRepoMock):
...
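Putting the decorator-order fix together with the return_value assignment mentioned in the question's edit, the corrected test might look roughly like this (the Invoice/InvoiceStatus import path is a guess, since it isn't shown in the question):

from unittest import TestCase, mock

import ims.business.invoice_business as business
from ims.models import Invoice, InvoiceStatus  # hypothetical import path

class UpdateInvoiceTests(TestCase):
    @mock.patch("ims.business.invoice_business.invoiceRepository")
    @mock.patch("ims.business.invoice_business.logger")
    def test_invoiceDoesNotExist_logsErrorAndContinues(self, loggerMock, invoiceRepoMock):
        # Arrange: the logger mock arrives first because it is the inner decorator.
        invoice = Invoice(123)
        invoice.set_status(InvoiceStatus.Filed, None)
        invoiceRepoMock.get.return_value = None  # assign, don't call

        # Act
        business.update_invoice_statuses([invoice])

        # Assert
        invoiceRepoMock.get.assert_called_once_with(123)
        loggerMock.error.assert_called_once_with("Unable to update status for invoice 123")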

Run skipped tests with nosetests (--no-skip doesn't work)

I have a lot of legacy tests like this. The tests are skipped via unittest but are run with nosetests.
import unittest
import nose
import time

class BaseTestClass(unittest.TestCase):
    @unittest.skip("Skipped")
    def test_add(self):
        """Test 1"""
        print "Execute test 1"
        time.sleep(5)

    def test_sub(self):
        """Test 2"""
        print "Execute test 2"
        time.sleep(5)

if __name__ == '__main__':
    nose.run(argv=["nose", ".", "--verbosity=2", "--nocapture", '--no-skip'])
I want to run all the skipped tests, but the --no-skip option doesn't seem to work, because I get this output:
Execute test 2
Test 1 ... Test 2 ... ok
----------------------------------------------------------------------
Ran 2 tests in 5.001s
OK
It looks like the skipped test is present in the output, but the code inside it doesn't execute.
I expected to see:
Test 1 ... ok
Execute test 1
Test 2 ... ok
Execute test 2
----------------------------------------------------------------------
Ran 2 tests in 10 s
OK
I ran into a similar problem, and after some research the issue seems to be that --no-skip has nothing to do with unittest.skip; it only affects how the nose.SkipTest exception is handled.
I ended up with a hacky solution that does the job in my case; maybe it can help somebody. It consists of using your own custom skip function that piggybacks on the --no-skip flag and reacts accordingly.
import sys
import unittest

no_skip_mode = '--no-skip' in sys.argv

def skip(reason):
    if no_skip_mode:
        # Don't decorate the test function. Just return it as is.
        def dummy_wrapper(fn):
            return fn
        return dummy_wrapper
    return unittest.skip(reason)
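Usage then looks the same as the standard decorator; only the import changes (assuming the helper above is saved in a local module, here called testutils):

import unittest
from testutils import skip  # hypothetical module holding the helper above

class BaseTestClass(unittest.TestCase):
    @skip("Skipped")  # honoured normally; a no-op when --no-skip is passed
    def test_add(self):
        """Test 1"""
        pass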
I guess you could try to monkeypatch unittest.skip.
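A rough sketch of that monkeypatching idea, assuming it runs (e.g. from a shared test helper imported first) before the test modules apply their @unittest.skip decorators:

import sys
import unittest

if '--no-skip' in sys.argv:
    def _no_skip(reason):
        # Replacement decorator factory: leaves the test function untouched.
        def decorator(fn):
            return fn
        return decorator
    # After this, @unittest.skip(...) in later-imported test modules is a no-op.
    unittest.skip = _no_skip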

Testing methods each with a different setup/teardown

I'm testing a class with many test methods. However, each method requires a unique context, so I write my code as follows:
class TestSomeClass(unittest.TestCase):
    def test_a(self):
        with a_context() as c:
            pass

    def test_b(self):
        with b_context() as c:
            pass

    def test_c(self):
        with c_context() as c:
            pass
However, the context managers are irrelevant to the test case and produce temporary files. To avoid polluting the file system when a test fails, I would like to use each context manager in a setup/teardown scenario.
I've looked at nose's with_setup, but the docs say that is meant for functions only, not methods. Another way is to move the test methods to separate classes each with a setup/teardown function. What's a good way to do this?
First of all, I'm not sure why what you have isn't working. I wrote some test code, and it shows that the context manager's exit code always gets called under the unittest.main() execution environment. (Note that I did not test nose, so maybe that's why I couldn't replicate your failure.) Maybe your context manager is broken?
Here's my test:
import unittest
import contextlib
import sys

@contextlib.contextmanager
def context_mgr():
    print "setting up context"
    try:
        yield
    finally:
        print "tearing down context"

class TestSomeClass(unittest.TestCase):
    def test_normal(self):
        with context_mgr() as c:
            print "normal task"

    def test_raise(self):
        with context_mgr() as c:
            print "raise task"
            raise RuntimeError

    def test_exit(self):
        with context_mgr() as c:
            print "exit task"
            sys.exit(1)

if __name__ == '__main__':
    unittest.main()
Running that with $ python test_test.py, I see "tearing down context" printed for all 3 tests.
Anyway, to answer your question, if you want a separate setup and teardown for each test, then you need to put each test in its own class. You can set up a parent class to do most of the work for you, so there isn't too much extra boilerplate:
class TestClassParent(unittest.TestCase):
    context_guard = context_mgr()

    def setUp(self):
        # do common setup tasks here
        self.c = self.context_guard.__enter__()

    def tearDown(self):
        # do common teardown tasks here
        self.context_guard.__exit__(None, None, None)

class TestA(TestClassParent):
    context_guard = context_mgr('A')

    def test_normal(self):
        print "task A"

class TestB(TestClassParent):
    context_guard = context_mgr('B')

    def test_normal(self):
        print "task B"
This produces the output:
$ python test_test.py
setting up context: A
task A
tearing down context: A
.setting up context: B
task B
tearing down context: B
.
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
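A possible variant of the parent-class idea (not from the original answer): build the context manager inside setUp and register its exit with addCleanup, so each test gets a fresh context manager instance and teardown still runs even if later setup steps fail. context_mgr is the same (parameterised) helper used in the answer above.

import unittest

class TestClassParent(unittest.TestCase):
    def make_context(self):
        # Subclasses override this to supply their own context manager.
        return context_mgr()

    def setUp(self):
        cm = self.make_context()
        self.c = cm.__enter__()
        # Cleanups run after tearDown, in last-in-first-out order, and also
        # run if setUp raises after this line.
        self.addCleanup(cm.__exit__, None, None, None)

class TestA(TestClassParent):
    def make_context(self):
        return context_mgr('A')

    def test_normal(self):
        pass  # test body using self.c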
