Pytest unittest-style setup: module setup - python

The pytest documentation describes four ways to set things up and tear them down:
module level setup/teardown
class level setup/teardown
method level setup/teardown
function level setup/teardown
But in one project it was implemented like this:
class TestClass:
    def setup(self):
        ...

    def test_1(self):
        ...
    ...
This setup method is called around each method invocation, just like setup_method from the documentation (except that it doesn't take the method as an argument). But I haven't seen it in the documentation or anywhere else. Why does it work?

Check this code:
https://pytest.org/latest/_modules/_pytest/python.html
I would guess it's inheriting and using `def setup(self)`.

It's part of the nose testing framework, whose setup conventions pytest also supports; see the pytest documentation on running tests written for nose for more information.
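To see the pattern without pytest itself, here is a minimal simulation of what the runner does: a fresh instance per test, with the nose-style `setup` hook called first. The `run_like_pytest` function is invented purely for illustration; it is not part of pytest's API.

```python
# Minimal illustration (not using pytest itself): a nose-style `setup`
# hook runs before each test method, like `setup_method` but without
# the method argument.
class TestClass:
    def setup(self):
        self.calls = getattr(self, "calls", [])
        self.calls.append("setup")

    def test_1(self):
        self.calls.append("test_1")

    def test_2(self):
        self.calls.append("test_2")


def run_like_pytest(cls):
    """Simulate the runner: fresh instance + setup() for every test."""
    log = []
    for name in sorted(n for n in dir(cls) if n.startswith("test")):
        inst = cls()           # fresh instance per test, like pytest
        inst.setup()           # nose-style setup hook
        getattr(inst, name)()  # the test itself
        log.extend(inst.calls)
    return log


print(run_like_pytest(TestClass))  # ['setup', 'test_1', 'setup', 'test_2']
```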

Related

Python MagicMock mocks too much when using decorator

I am trying to get documentation to build on a ReadTheDocs installation via Sphinx. The classes I am documenting inherit from a larger framework, which I cannot easily install and thus would like to mock. However, Mock seems to be overly greedy, also mocking the classes I would actually like to document. The code in question is as follows:
# imports that need to be mocked
from big.framework import a_class_decorator, ..., SomeBaseClass

@a_class_decorator("foo", "bar")
class ToDocument(SomeBaseClass):
    """ Lots of nice documentation here
    which will never appear
    """

    def a_function_that_is_being_documented():
        """ This will show up on RTD
        """
I've gotten to the point where I make sure I don't blindly mock the decorator, but am instead explicit about it in my Sphinx conf.py. Otherwise, I follow the RTD suggestions for mocking modules:
class MyMock(MagicMock):
    @classmethod
    def a_class_decorator(cls, classid, version):
        def real_decorator(theClass):
            print(theClass)
            return theClass
        return real_decorator

    @classmethod
    def __getattr__(cls, name):
        return MagicMock()

sys.modules['big.framework'] = MyMock()
Now I would expect the printout to show something referring to my to-be-documented class, e.g. <ToDocument ...>.
However, I always get a Mock for that class as well, <MagicMock spec='str' id='139887652701072'>, which of course has none of the documentation I am trying to build. Any ideas?
Turns out the problem was inheritance from a mocked class. Being explicit about the base class and creating an empty class

class SomeBaseClass:
    pass

to patch in via conf.py solved the problem.
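Putting the pieces together, here is a runnable sketch of that fix. The module path `big.framework` comes from the question; the variable names and the docstring are invented for illustration.

```python
import sys
from unittest.mock import MagicMock

# Mock the framework module, but expose a real (empty) SomeBaseClass
# so that subclasses keep their docstrings instead of being swallowed
# by MagicMock.
class SomeBaseClass:
    pass

mocked_framework = MagicMock()
mocked_framework.SomeBaseClass = SomeBaseClass  # real class, not a mock
sys.modules['big'] = MagicMock()
sys.modules['big.framework'] = mocked_framework

# Inheritance now works normally, so Sphinx can see the docstring:
from big.framework import SomeBaseClass

class ToDocument(SomeBaseClass):
    """Lots of nice documentation here."""

print(ToDocument.__doc__)  # Lots of nice documentation here.
```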

Writing Python test classes using Nose and Sqlalchemy with fixtures

I've been fighting with nose and fixtures today and can't help but feel I'm doing something wrong.
I had all my tests written as functions and everything was working OK, but I want to move them to test classes as I think it's tidier and the tests can then be inherited from.
I create a connection to the testdb, the transaction to use, and a number of fixtures at a package level for all tests. This connection is used by repository classes in the application to load test data from the DB.
I tried to create my test classes with the class they are to test as an attribute. This was set in the __init__ function of the test class.
The problem I had is that the class I was testing needs to be instantiated with data from my test DB, but nose creates an instance of the test class before the package setup function is run. This means that there are no fixtures when the test classes are created, and the tests fail.
The only way I can get it to work is by adding a module setup function that defines the class I want to test as a global, then use that global in the test methods, but that seems quite hacky. Can anyone suggest a better way?
That isn't very clear, so here is an example of my test module...
from nose.tools import *
from myproject import repositories
from myproject import MyClass

def setup():
    global class_to_test
    db_data_repo = repositories.DBDataRepository()
    db_data = db_data_repo.find(1)
    class_to_test = MyClass(db_data)

class TestMyClass(object):
    def test_result_should_be_an_float(self):
        result = class_to_test.do_something()
        assert isinstance(result, float)
If you know a better way to achieve this then please share :) thanks!
If you are moving toward unittest.TestCase-based testing, you might as well use the setUp()/tearDown() methods, which run before and after each test method, and setUpClass() to set up class-specific things; nose handles these just as unittest does. Something like this:
class TestMyClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        db_data_repo = repositories.DBDataRepository()
        db_data = db_data_repo.find(1)
        cls.class_to_test = MyClass(db_data)

    def test_result_should_be_an_float(self):
        result = self.class_to_test.do_something()
        assert isinstance(result, float)

Setup method for nosetest. (Test class)

I am mocking out a database in some tests that I am doing. How would I create a setup method for the entire class, such that it runs each time an individual test within the class runs?
Example of what I am attempting to do.
from mocks import MockDB

class DBTests(unittest.TestCase):
    def setup(self):
        self.mock_db = MockDB()

    def test_one(self):
        ...  # deal with self.mock_db

    def test_two(self):
        ...  # deal with self.mock_db, as if nothing from test_one has happened
I'm assuming a teardown method would also be possible, but I can't find documentation describing something like this.
If you are using the Python unittest framework, something like this is what you want:
class Test(unittest.TestCase):
    def setUp(self):
        self.mock_db = MockDB()

    def tearDown(self):
        pass  # clean up

    def test_1(self):
        pass  # test stuff
Documentation
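To confirm that each test really starts from a clean state, here is a runnable sketch; a plain list stands in for the `MockDB` class from the question.

```python
import unittest

# setUp gives every test its own fresh instance, so mutations made in
# one test never leak into another.
class DBTests(unittest.TestCase):
    def setUp(self):
        self.mock_db = []  # stand-in for MockDB()

    def test_one(self):
        self.mock_db.append("row")
        self.assertEqual(self.mock_db, ["row"])

    def test_two(self):
        # nothing from test_one is visible here
        self.assertEqual(self.mock_db, [])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DBTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```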
With Nose, subclassing of TestCase works the same way as standard unittest -- setUp/tearDown are the same. From the nose docs
Test classes
A test class is a class defined in a test module that matches
testMatch or is a subclass of unittest.TestCase. All test classes are
run the same way: Methods in the class that match testMatch are
discovered, and a test case is constructed to run each method with a
fresh instance of the test class. Like unittest.TestCase subclasses,
other test classes can define setUp and tearDown methods that will be
run before and after each test method. Test classes that do not
descend from unittest.TestCase may also include generator methods and
class-level fixtures. Class-level setup fixtures may be named
setup_class, setupClass, setUpClass, setupAll or setUpAll; teardown
fixtures may be named teardown_class, teardownClass, tearDownClass,
teardownAll or tearDownAll. Class-level setup and teardown fixtures
must be class methods.

TestCase.setUp() inside decorator

I am unhappy with TestCase.setUp(): If a test has a decorator, setUp() gets called outside the decorator.
I am not new to python, I can help myself, but I search a best practice solution.
TestCase.setUp() feels like a holdover from before decorators were introduced in Python.
What is a clean solution to set up a test, if the setup should happen inside the decorator of the test method?
This would be a solution, but setUp() gets called twice:
class Test(unittest.TestCase):
    def setUp(self):
        ...

    @somedecorator
    def test_foo(self):
        self.setUp()
Example use case: somedecorator opens a database connection and it works like a with-statement: It can't be broken into two methods (setUp(), tearDown()).
Update
somedecorator is from a different application. I don't want to modify it. I could do some copy+paste, but that's not a good solution.
Ugly? Yes:
import inspect

class Test(unittest.TestCase):
    def actualSetUp(self):
        ...

    def setUp(self):
        if not any('test_foo' in i for i in inspect.stack()):
            self.actualSetUp()

    @somedecorator
    def test_foo(self):
        self.actualSetUp()
Works by inspecting the current stack and determining whether we are in any function called 'test_foo', so that should obviously be a unique name.
Probably cleaner to just create another TestCase class specifically for this test, where you can have no setUp method.
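That suggestion can be sketched as follows: move the decorated test into its own TestCase with no setUp, so the shared setUp never runs for it. `somedecorator` here is an invented stand-in for the third-party decorator from the question, which manages a resource around the test like a with-statement.

```python
import unittest

def somedecorator(fn):
    # Stand-in for the real decorator: acquire a resource before the
    # test and release it afterwards, like a with-statement.
    def wrapper(self):
        self.resource = "opened"
        try:
            return fn(self)
        finally:
            self.resource = "closed"
    return wrapper

class NormalTests(unittest.TestCase):
    def setUp(self):
        self.fixture = "shared fixture"

    def test_plain(self):
        self.assertEqual(self.fixture, "shared fixture")

class DecoratedTests(unittest.TestCase):  # deliberately no setUp
    @somedecorator
    def test_foo(self):
        self.assertEqual(self.resource, "opened")

loader = unittest.defaultTestLoader
suite = unittest.TestSuite([
    loader.loadTestsFromTestCase(NormalTests),
    loader.loadTestsFromTestCase(DecoratedTests),
])
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```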

Python unittest: having an external function for repetitive code in different modules

Let's say I have a web service ServA written in python and I want to write some unit tests.
ServA does several things (with different views) but all the views produce a similar log of the requests.
These tests should check the logs of ServA in the different circumstances, so there is a lot of repeated code for these unit tests (the structure of the logs is always the same).
My idea is to write a generic function to avoid repetition of code and I have found this other question that solves the problem creating a generic method inside the unittest class.
But now what if I have another web service ServB and another set of tests and I need to do the same?
Is there a way to reuse the generic function?
Should I simply create a test class with the method to check the logs, like this:

class MyMetaTestClass(unittest.TestCase):
    def check_logs(self, log_list, **kwargs):
        ...  # several self.assertEqual calls
and then have the tests for ServA and ServB inherit from this class:

class TestServA(MyMetaTestClass):
    def test_log1(self):
        logs = extract_the_logs()
        self.check_logs(logs, log_1=1, log2='foo')

Is there another (better) way?
You can inherit from a common base class like you did, but the base class doesn't have to be a TestCase subclass itself - you can just make it a mixin class:
# testutils.py
class LogCheckerMixin(object):
    """This class adds log-checking abilities to a TestCase."""
    def check_logs(self, logs, **kw):
        self.assertWhatever(something)

# myserver/tests.py
import unittest
from testutils import LogCheckerMixin

class MyServerTest(unittest.TestCase, LogCheckerMixin):
    def test_log1(self):
        logs = extract_the_logs()
        self.check_logs(logs, log_1=1, log2='foo')
Or you can just make it a plain function and call it from your test:
# testutils.py
def check_logs(testcase, logs, **kw):
    testcase.assertWhatever(something)

# myserver/tests.py
import unittest
from testutils import check_logs

class MyServerTest(unittest.TestCase):
    def test_log1(self):
        logs = extract_the_logs()
        check_logs(self, logs, log_1=1, log2='foo')
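Here is a runnable sketch of the mixin approach; the dict-based log format and the inline stand-in for `extract_the_logs()` are invented for illustration.

```python
import unittest

# The mixin adds assertion helpers to any TestCase it is combined with;
# it is not itself a TestCase, so it is never collected on its own.
class LogCheckerMixin:
    def check_logs(self, logs, **expected):
        for key, value in expected.items():
            self.assertEqual(logs.get(key), value)

class MyServerTest(unittest.TestCase, LogCheckerMixin):
    def test_log1(self):
        logs = {"log_1": 1, "log2": "foo"}  # stand-in for extract_the_logs()
        self.check_logs(logs, log_1=1, log2="foo")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MyServerTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```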
It's rather opinion-based, but what I've seen most often is a set of separate helper classes, which can be used by any test suite.
I usually create an infrastructure folder/namespace/module/package (in the test project) so as not to confuse tests, which should concern only domain aspects, with technical issues. In this case you could have something like this:
logHelper.py:

def assert_something_about_logs(logs, something):
    ...  # utility code, then an assertion

def assert_something_else(logs):
    ...  # utility code, then an assertion

etc.
This way your unit tests will read well, for example assert_that_logs_contain(errorMessage="Something went wrong"), and all technicalities are hidden so they don't obscure the test.
