I've been fighting with nose and fixtures today and can't help but feel I'm doing something wrong.
I had all my tests written as functions and everything was working OK, but I want to move them to test classes as I think it's tidier and the tests can then be inherited from.
I create a connection to the testdb, the transaction to use, and a number of fixtures at a package level for all tests. This connection is used by repository classes in the application to load test data from the DB.
I tried to give each test class the class it tests as an attribute, set in the test class's __init__ method.
The problem I had is that the class under test needs to be instantiated with data from my test DB, but nose creates an instance of the test class before the package-level setup function runs. This means there are no fixtures yet when the test classes are created, and the tests fail.
The only way I can get it to work is by adding a module-level setup function that defines the class I want to test as a global, then using that global in the test methods, but that seems quite hacky. Can anyone suggest a better way?
That isn't very clear, so here is an example of my test module...
from nose.tools import *
from myproject import repositories
from myproject import MyClass

def setup():
    global class_to_test
    db_data_repo = repositories.DBDataRepository()
    db_data = db_data_repo.find(1)
    class_to_test = MyClass(db_data)

class TestMyClass(object):
    def test_result_should_be_a_float(self):
        result = class_to_test.do_something()
        assert isinstance(result, float)
If you know a better way to achieve this then please share :) thanks!
If you are moving toward unittest.TestCase class-based testing, you might as well use the setUp()/tearDown() methods, which run before and after each test method, and setUpClass() to set up class-specific things. Nose handles these just as unittest does, something like this:
class TestMyClass(unittest.TestCase):
#classmethod
def setUpClass(cls):
db_data_repo = repositories.DBDataRepository()
db_data = db_data_repo.find(1)
cls.class_to_test = MyClass(db_data)
def test_result_should_be_an_float(self):
result = self.class_to_test.do_something()
assert isinstance(result, float)
I'm looking to write unit tests for some code that uses an object's __subclasses__() method. Ultimately, I was trying to use __subclasses__() to keep track of classes dynamically imported into my application through a namespace package.
In some cases, my code base has test classes created within a pytest file. My problem is that when I run pytest in bulk on my source directory, I see failures or errors due to import pollution from these one-off test classes: the pytest run keeps all imports alive as it works through the source tree. On their own the tests pass fine, but in sequence they fail, sometimes depending on the order in which they run.
In my current code branch these __subclasses__() invocations live in the application code, but I have moved them out into tests here to demonstrate with a minimal example:
In my/example.py
class MyClass(object):
    def __init__(self):
        pass

class MySubClass(MyClass):
    def __init__(self):
        super().__init__()
In my/test_subclass.py
from my.example import MyClass

class TestSubClass(MyClass):
    def __init__(self):
        super().__init__()

def test_TestSubClass():
    assert issubclass(TestSubClass, MyClass)
In my/test_subclasses.py
from my.example import MySubClass, MyClass

def test_find_subclasses():
    assert all([cls == MySubClass for cls in MyClass.__subclasses__()])
The result, when run on my machine, is that the test_find_subclasses() test fails due to the discovery of the TestSubClass when running after test_subclass.py:
    def test_find_subclasses():
>       assert all([cls == MySubClass for cls in MyClass.__subclasses__()])
E       assert False
E        +  where False = all([True, False])
What is the best way to maintain a "clean" state during sequenced pytest runs so that I can avoid mangling imports?
As discussed in the comments, you probably don't want to hard-code the types that may extend MyClass, since you really can't predict what your application will need in the future. If you want to check subclassing, just check that it works at all:
def test_find_subclasses():
    assert MySubClass in MyClass.__subclasses__()
Even more concisely, you could simply do
def test_find_subclasses():
    assert issubclass(MySubClass, MyClass)
That being said, you can technically filter the classes you are looking through. In your particular case, you have a distinctive naming convention, so you can do something like
def only_production_classes(iterable):
    return [cls for cls in iterable
            if not cls.__name__.lower().startswith('test')]

def test_find_subclasses():
    assert all([cls == MySubClass
                for cls in only_production_classes(MyClass.__subclasses__())])
You can define only_production_classes using other criteria, like the module that the class appears in:
def only_production_classes(iterable):
    # check the last component of the module path, so that a class defined
    # in 'my.test_subclass' is excluded as well as one in 'test_subclass'
    return [cls for cls in iterable
            if not cls.__module__.rpartition('.')[-1].startswith('test_')]
You can't easily unlink class objects that have been loaded, so your idea of a "clean" test environment is not quite feasible. But you do have the option of filtering the data that you work with based on where it was imported from.
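Along the same lines, here is a sketch of filtering on the source file a class was defined in, rather than on its name; inspect.getfile and the 'test' filename check are my own choices, not from the discussion above:

import inspect
from pathlib import Path

def only_production_classes(iterable):
    # keep only classes whose defining file doesn't look like a test file
    return [cls for cls in iterable
            if 'test' not in Path(inspect.getfile(cls)).name.lower()]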
I am trying to get documentation to build on a ReadTheDocs installation via Sphinx. The classes I am documenting inherit from a larger Framework, which I cannot easily install and thus would like to mock. However, Mock seems to be overly greedy in mocking also the classes I would actually like to document. The code in question is as follows:
# imports that need to be mocked
from big.framework import a_class_decorator, ..., SomeBaseClass

@a_class_decorator("foo", "bar")
class ToDocument(SomeBaseClass):
    """ Lots of nice documentation here
    which will never appear
    """

    def a_function_that_is_being_documented():
        """ This will show up on RTD
        """
I got to the point of making sure I don't blindly mock the decorator, handling it explicitly in my Sphinx conf.py instead. Otherwise, I follow the RTD suggestions for mocking modules:
class MyMock(MagicMock):
    @classmethod
    def a_class_decorator(cls, classid, version):
        def real_decorator(theClass):
            print(theClass)
            return theClass
        return real_decorator

    @classmethod
    def __getattr__(cls, name):
        return MagicMock()

sys.modules['big.framework'] = MyMock()
Now I would expect the printout to show something referring to the class being documented, e.g. <ToDocument ...>. However, I always get a mock for that class as well, <MagicMock spec='str' id='139887652701072'>, which of course carries none of the documentation I am trying to build. Any ideas?
It turns out the problem was inheritance from a mocked class. Being explicit about the base class and creating an empty class

class SomeBaseClass:
    pass

to patch in via conf.py solved the problem.
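A minimal sketch of what that can look like in conf.py, reusing the MyMock class from above; attaching the real, empty SomeBaseClass to the mocked module is my reading of the fix, not verbatim from the answer:

# conf.py -- sketch, assuming the names from the question
import sys
from unittest.mock import MagicMock

class SomeBaseClass:
    # real, empty stand-in so documented classes inherit from a real class
    pass

class MyMock(MagicMock):
    @classmethod
    def a_class_decorator(cls, classid, version):
        def real_decorator(theClass):
            return theClass
        return real_decorator

mock_framework = MyMock()
mock_framework.SomeBaseClass = SomeBaseClass  # expose the real base class
sys.modules['big.framework'] = mock_framework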
I have a class with a dictionary that is used to cache responses from the server for particular inputs. Since this is used for caching purposes, it is kept as a class variable.
class MyClass:
    cache_dict = {}

    def get_info_server(self, arg):
        if arg not in self.cache_dict:
            self.cache_dict[arg] = Client.get_from_server(arg)
        return self.cache_dict[arg]

    def do_something(self, arg):
        # Do something based on get_info_server(arg)
        ...
And when writing unit tests, since the dictionary is a class variable, the values are cached across test cases.
Test Cases
# Assume that Client is mocked.
def test_caching():
    m = MyClass()
    m.get_info_server('foo')
    m.get_info_server('foo')
    mock_client.get_from_server.assert_called_once_with('foo')

def test_do_something():
    m = MyClass()
    mock_client.get_from_server.return_value = 'bar'
    m.do_something('foo')  # This internally calls get_info_server('foo')
If test_caching executes first, the cached value will be some mock object. If test_do_something executes first, the assertion that the server is called exactly once will fail.
How do I make the tests independent of each other, besides manipulating the dictionary directly? That feels like requiring intimate knowledge of the inner workings of the code: what if the inner workings change later? All I need to verify is the API itself, without relying on the internals.
You can't really avoid resetting your cache here. If you are unit testing this class, then your unit tests will need intimate knowledge of its inner workings anyway, so just reset the cache. You can rarely change how a class works without adjusting its unit tests.
If you feel that this still creates a maintenance burden, make cache handling explicit by adding a class method:
class MyClass:
    cache_dict = {}

    @classmethod
    def _clear_cache(cls):
        # for testing only; hook to clear the class-level cache
        cls.cache_dict.clear()
Note that I still gave it a name with a leading underscore; this is not a method that a 3rd party should call, it is only there for tests. But now you have centralised clearing the cache, giving you control over how it is implemented.
If you are using the unittest framework to run your tests, clear the cache before each test in a TestCase.setUp() method. If you are using a different testing framework, that framework will have a similar hook. Clearing the cache before each test ensures that you always have a clean state.
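For example, a minimal sketch with unittest (the MyClassTests name is a placeholder):

import unittest

class MyClassTests(unittest.TestCase):
    def setUp(self):
        # runs before every test method, so each test starts with an empty cache
        MyClass._clear_cache()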
Do take into account that your cache is not thread safe, if you are running tests in parallel with threading you'll have issues here. Since this also applies to the cache implementation itself, this is probably not something you are worried about right now.
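If that ever changes, a lock around the lookup is the usual fix; here is a minimal sketch, where the _cache_lock attribute is my own addition to the class from the question:

import threading

class MyClass:
    cache_dict = {}
    _cache_lock = threading.Lock()  # hypothetical addition, not in the original

    def get_info_server(self, arg):
        # serialize cache access so concurrent callers don't both hit the server
        with self._cache_lock:
            if arg not in self.cache_dict:
                self.cache_dict[arg] = Client.get_from_server(arg)
            return self.cache_dict[arg]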
You didn't put it in the question explicitly, but I'm assuming your test methods live in a subclass of unittest.TestCase called MyClassTests.
Explicitly set MyClass.cache_dict in each test method. If it's just a dictionary, without any getters/setters, you don't need a Mock.
If you want to guarantee that every test method is independent, set MyClass.cache_dict = {} in MyClassTests.setUp().
You need to make use of Python's built-in unittest TestCase and implement setUp and tearDown methods.
If you define setUp() and tearDown() in your tests, they will run around every single test method (before and after, respectively).
Example:

import unittest

class MyClassTests(unittest.TestCase):
    def setUp(self):
        # prepare any state needed for each test; if this is not considered
        # "fiddling", use this method to reset your cache to a fresh state
        MyClass.cache_dict = {}

    # your test methods here

    def tearDown(self):
        pass  # reset any other state, as needed
Check out the docs for more info: https://docs.python.org/2/library/unittest.html
One thing I can suggest is to use setUp() and tearDown() methods in your test class.
from unittest import TestCase

class MyTest(TestCase):
    def setUp(self):
        self.m = MyClass()
        # anything else you need to load before testing

    def tearDown(self):
        self.m = None

    def test_caching(self):
        self.m.get_info_server('foo')
        self.m.get_info_server('foo')
        mock_client.get_from_server.assert_called_once_with('foo')
I am mocking out a database in some tests that I am doing. How would I create a setup method for the entire class, such that it runs each time an individual test within the class runs?
Example of what I am attempting to do.
import unittest

from mocks import MockDB

class DBTests(unittest.TestCase):
    def setup(self):
        self.mock_db = MockDB()

    def test_one(self):
        # deal with self.mock_db
        ...

    def test_two(self):
        # deal with self.mock_db, as if nothing from test_one has happened
        ...
I'm assuming a teardown method would also be possible, but I can't find documentation for doing something like this.
If you are using the Python unittest framework, something like this is what you want:
class Test(unittest.TestCase):
    def setUp(self):
        self.mock_db = MockDB()

    def tearDown(self):
        pass  # clean up

    def test_1(self):
        pass  # test stuff
Documentation
With nose, subclassing TestCase works the same way as in standard unittest -- setUp/tearDown behave the same. From the nose docs:
Test classes
A test class is a class defined in a test module that matches
testMatch or is a subclass of unittest.TestCase. All test classes are
run the same way: Methods in the class that match testMatch are
discovered, and a test case is constructed to run each method with a
fresh instance of the test class. Like unittest.TestCase subclasses,
other test classes can define setUp and tearDown methods that will be
run before and after each test method. Test classes that do not
descend from unittest.TestCase may also include generator methods and
class-level fixtures. Class-level setup fixtures may be named
setup_class, setupClass, setUpClass, setupAll or setUpAll; teardown
fixtures may be named teardown_class, teardownClass, tearDownClass,
teardownAll or tearDownAll. Class-level setup and teardown fixtures
must be class methods.
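So, a plain (non-TestCase) nose test class can combine both kinds of fixture. Here is a sketch reusing the MockDB from the question; the db_class attribute is my own placeholder:

from mocks import MockDB

class DBTests(object):  # no unittest.TestCase needed under nose
    @classmethod
    def setup_class(cls):
        # class-level fixture: runs once, before any test in this class
        cls.db_class = MockDB

    def setUp(self):
        # runs before each test method, just as with unittest.TestCase
        self.mock_db = self.db_class()

    def tearDown(self):
        # runs after each test method, even if the test failed
        self.mock_db = None

    def test_one(self):
        assert self.mock_db is not None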
Let's say I have a web service ServA written in Python and I want to write some unit tests.
ServA does several things (with different views) but all the views produce a similar log of the requests.
These tests should check the logs of ServA under different circumstances, so there is a lot of repeated code across these unit tests (the structure of the logs is always the same).
My idea is to write a generic function to avoid repeating code, and I have found this other question, which solves the problem by creating a generic method inside the unittest class.
But now what if I have another web service ServB and another set of tests and I need to do the same?
Is there a way to reuse the generic function?
Should I simply create a test class with the method to check the logs like this:
class MyMetaTestClass(unittest.TestCase):
    def check_logs(self, log_list, **kwargs):
        # several self.assertEqual calls
        ...
and then the tests for ServA and ServB inherit this class like this:
class TestServA(MyMetaTestClass):
    def test_log1(self):
        logs = extract_the_logs()
        self.check_logs(logs, log_1=1, log2='foo')

Is there another (better) way?
You can inherit from a common base class like you did, but the base class doesn't have to be a TestCase subclass itself - you can just make it a mixin class:
# testutils.py
class LogCheckerMixin(object):
    """ This class adds log-checking abilities to a TestCase. """
    def check_logs(self, logs, **kw):
        self.assertWhatever(something)

# myserver/tests.py
import unittest
from testutils import LogCheckerMixin

class MyServerTest(unittest.TestCase, LogCheckerMixin):
    def test_log1(self):
        logs = extract_the_logs()
        self.check_logs(logs, log_1=1, log2='foo')
Or you can just make it a plain function and call it from your test:
# testutils.py
def check_logs(testcase, logs, **kw):
    testcase.assertWhatever(something)

# myserver/tests.py
import unittest
from testutils import check_logs

class MyServerTest(unittest.TestCase):
    def test_log1(self):
        logs = extract_the_logs()
        check_logs(self, logs, log_1=1, log2='foo')
It's rather opinion-based, but what I've seen most often is a set of separate helper classes that any test suite can use.
I usually create an infrastructure folder/namespace/module/package (in the test project) so as not to confuse tests, which should concern only domain aspects, with technical issues. In this case you could have something like this:
logHelper.py:
def assert_something_about_logs(logs, something):
    # utility code
    # do an assertion
    ...

def assert_something_else(logs):
    # utility code
    # do an assertion
    ...
etc.
This way your unit tests read well, for example assert_that_logs_contain(errorMessage="Something went wrong"), and all the technicalities are hidden so they don't obscure the test.
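A sketch of how a test might then read; the infrastructure package path, the helper name, extract_the_logs, and the keyword argument are all placeholders for illustration:

# test_serv_a.py -- hypothetical usage of a helper like the ones above
from infrastructure.logHelper import assert_that_logs_contain

def test_error_is_logged():
    logs = extract_the_logs()  # placeholder from the question
    assert_that_logs_contain(logs, errorMessage="Something went wrong")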