Are unittest base classes good practice? (python/webapp2)

I'm rather new to unit testing and am trying to feel out best practices. I've seen several questions here about unit tests inheriting from a base class that itself contains several tests, for example:
class TestBase(unittest.TestCase):
    # some standard tests

class AnotherTest(TestBase):
    # run some more tests in addition to the standard tests
I think what I've gathered from the community is that it's a better idea to write separate tests for each implementation and use multiple inheritance. But what if that base class doesn't actually contain any tests, just helpers for all your other tests? For example, let's say I've got a base test class that stores some common methods that most, if not all, of my other tests will use. Let's also assume that I've got a database model in models.py called ContentModel.
test_base.py
import unittest

import webtest
from google.appengine.ext import testbed
from models import ContentModel

class TestBase(unittest.TestCase):
    def setUp(self):
        self.ContentModel = ContentModel
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        # other useful stuff

    def tearDown(self):
        self.testbed.deactivate()

    def createUser(self, admin=False):
        # create a user that may or may not be an admin
        pass

    # possibly other useful things
It seems this would save me tons of time on all other tests:
another_test.py
from test_base import TestBase

class AnotherTest(TestBase):
    def test_something_authorized(self):
        self.createUser(admin=True)
        # run a test

    def test_something_unauthorized(self):
        self.createUser(admin=False)
        # run a test

    def test_some_interaction_with_the_content_model(self):
        new_instance = self.ContentModel(foo='bar').put()
        # run a test
Note: this is based on some of my work with webapp2 on Google App Engine, but I expect an analogous situation arises for pretty much any Python web application.
My Question
Is it good practice to use a base/helper class that contains useful methods/variables which all your other tests inherit, or should each test class be "self contained"?
Thanks!

Superb question. Almost anything you do that automates testing is excellent. That said, the tests often serve as the only reliable source of documentation, so they should be very easy to read and comprehend. Tests are reliable in a way comments are not, because they show what the software really does and how to use it.
I like this approach. But you might also try out nose. Nose is a bit lighter weight to set up, and is well supported if you go the continuous-integration route with something like Jenkins for automated build/test/deployment. Nose does not format its messages quite as nicely as the xUnit style (IMO, of course), but for many things you might be willing to give that up.
BTW, Python is not Java, so it is perfectly acceptable to put shared helpers in plain old Python functions rather than methods on a base class.
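For instance, here is a minimal sketch of that idea: the question's createUser helper pulled out into plain module-level functions (the returned user object is just a placeholder):

# test_helpers.py -- plain functions shared across test modules, no base class
from google.appengine.ext import testbed

def activate_testbed():
    # activate a fresh Testbed; the caller must deactivate() it in tearDown
    tb = testbed.Testbed()
    tb.activate()
    return tb

def create_user(admin=False):
    # create a user that may or may not be an admin (placeholder object)
    return {'is_admin': admin}

Any test module can then do from test_helpers import create_user without inheriting from anything.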

A base class is a good option for some uses, as long as you don't test anything in the base class itself. I use base classes all the time.
Also, think of the value of seeing the code in your test class. A good example is a base class I use all the time (in C#/.NET): I use an SDK, ArcObjects from Esri, that requires a license. In normal execution this is handled elsewhere, but in testing I have to check out (or activate) a license before I can use the objects in the library. This has absolutely nothing to do with the functionality of the code I am testing in the test class, but it is required to make the tests run. So I decided to tuck this functionality away in a base class that checks out a license before each test and checks it back in afterwards. Tests that require a license simply inherit from this base class.
Finally, be very careful about where you set up and tear down the prerequisites for a test. It can get messy if some of it is done in the base class and some in the child class.
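As a sketch of keeping that explicit (the license strings stand in for the real ArcObjects checkout described above), have the child class always call the base setUp first, so the order of prerequisites stays obvious:

import unittest

class LicensedTestBase(unittest.TestCase):
    def setUp(self):
        # stand-in for checking out the license
        self.license = 'checked-out'

    def tearDown(self):
        # stand-in for checking the license back in
        self.license = None

class FeatureTest(LicensedTestBase):
    def setUp(self):
        super(FeatureTest, self).setUp()  # base prerequisites first...
        self.fixture = []                 # ...then child-specific setup

    def test_has_license(self):
        self.assertEqual(self.license, 'checked-out')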

Related

Monkeypatch persisting across unit tests

I have a custom framework which runs different code for different clients. I have monkeypatched certain methods in order to customize functionality for a client.
Here is the pattern simplified:
# import monkeypatches here
if self.config['client'] == 'cool_dudes':
    from app.monkeypatches import Stuff
if self.config['client'] == 'cool_dudettes':
    from app.monkeypatches import OtherStuff
Here is an example patch:
from app.framework.stuff import Stuff

def function_override(self):
    pass  # replacement behaviour for this client

Stuff.function = function_override
This works fine when the program executes normally, since it runs in a batch manner and spins up from scratch every time. However, when running the unit tests, I find that the monkey patches persist across tests, causing unexpected behavior.
I realize that it would be far better to use an object oriented inheritance approach to these overrides, but I inherited this codebase and am not currently empowered to rearchitect it to that degree.
Barring properly re-architecting the program, how can I prevent these monkey patches from persisting across unit tests?
The modules, including app.framework.<whatever>, are not reloaded for every test, so any changes you make in them persist. The same happens if your module is stateful (that's one of the reasons global state is not such a good idea; you should rather keep state in objects).
Your options are to:
undo the monkey patches when needed, or
change them into something more generic that changes (semi-)automatically depending on the test that is running, or
(preferred) do not reinvent the wheel: use an existing, manageable, time-proven solution for your task (or at least base your work on one if it doesn't meet your requirements completely). E.g. if you use the patches for mocking, see "How can one mock/stub python module like urllib". Among the suggestions there is mock.patch, which applies the patch for a specific test and undoes it upon completion, as in the sketch below.
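For instance, a minimal sketch of the mock.patch approach, assuming the question's (hypothetical) app.framework.stuff module path:

import unittest
from unittest import mock  # on Python 2, use the standalone `mock` package

class CoolDudesTest(unittest.TestCase):
    @mock.patch('app.framework.stuff.Stuff.function')
    def test_with_patched_function(self, fake_function):
        fake_function.return_value = None
        # ... exercise code that calls Stuff.function here ...
        # the patch is reverted automatically when this test returns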
Anyone coming here looking for information about monkeypatching might want to have a look at pytest's monkeypatch fixture. It avoids the OP's problem by automatically undoing all modifications after the test function has finished.
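A minimal sketch of that fixture in use (the module path again follows the question's hypothetical app.framework.stuff):

from app.framework.stuff import Stuff

def test_function_override(monkeypatch):
    # replace Stuff.function for this test only; pytest restores the
    # original attribute as soon as the test finishes
    monkeypatch.setattr(Stuff, 'function', lambda self: None)
    # ... exercise code that calls Stuff.function here ...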

Is having a unit test that is mostly mock verification a smell?

I have a class that connects three other services, specifically designed to make implementing the other services more modular, but the bulk of my unit test logic is in mock verification. Is there a way to redesign to avoid this?
Python example:
class Input(object): pass
class Output(object): pass
class Finder(object): pass

class Correlator(object):
    def __init__(self, input, output, finder):
        self.input = input
        self.output = output
        self.finder = finder

    def run(self):
        self.finder.find(self.input.GetRows())
        self.output.print(self.finder)
I then have to mock input, output and finder. Even if I do make another abstraction and return something from Correlator.run(), it will still have to be tested as a mock.
Just ask yourself: what exactly do you need to check in this particular test case? If that check does not rely on the other classes being real rather than dummies, then you are OK.
However, a lot of mocks is usually a smell, in the sense that you are probably trying to test integration without actually doing integration. So if you assume that a class which passes its tests with mocks will work fine with the real classes, then yes, you have to write some more tests.
Personally, I don't write many unit tests at all. I'm a web developer and I prefer functional tests, which exercise the whole application via HTTP requests, as users would. Your case may be different.
There's no reason to use only unit tests; maybe integration tests would be more useful for this case. Initialize all the objects properly, use the main class a bit, and assert on the (possibly complex) results. That way you'll test interfaces, output predictability, and other things that are important further up the stack. I've used this before and found that something which is difficult to integration-test probably has too many attributes/parameters, or output that is too complicated or wrongly formatted.
On a quick glance, this does look like the level of mocking has become too large. If you're in a dynamic language (I'm assuming yes here, since your example is in Python), I'd try constructing subclasses of the production classes with the most problematic methods overridden to present mocked data, so you'd get a mix of production and mocked code (see the sketch below). If your code path doesn't allow for instantiating the objects, I'd try monkey-patching in replacement methods that return mock data.
Whether or not this is a code smell also depends on the quality of the mocked data. In my experience, dropping into a debugger and copy-pasting known-correct data, or sniffing it from the network, is the preferred way of ensuring that.
Integration vs. unit testing is also an economic question: how painful is it to replace unit tests with integration/functional tests? The larger the scale of your system, the more there is to gain from lightweight mocking, and hence unit tests.
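As a sketch of that subclass-with-overrides approach, building on the question's Input/Output/Finder/Correlator classes above (the canned row data is illustrative, e.g. copy-pasted from a real run as suggested):

class FakeInput(Input):
    def GetRows(self):
        return [('known', 'good', 'row')]  # canned data instead of a real source

class FakeFinder(Finder):
    def find(self, rows):
        self.found = rows  # record the call instead of doing real lookups

class FakeOutput(Output):
    def print(self, finder):
        self.printed = finder.found  # capture instead of printing

# mostly-production wiring: only the expensive edges are replaced
correlator = Correlator(FakeInput(), FakeOutput(), FakeFinder())
correlator.run()
assert correlator.finder.found == [('known', 'good', 'row')]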

unittest tests reuse for family of classes

I have a problem organizing my unittest-based tests for a family of classes. For example, assume I implement a "dictionary" interface and have 5 different implementations I want to test.
I can write one test class that tests the dictionary interface. But how can I nicely reuse it to test all my implementations? So far I do something ugly:
DictType = hashtable.HashDict
at the top of the file, and then use DictType in the test class. To test another class I manually change DictType to something else.
How can I do this otherwise? I can't pass arguments to unittest classes, so is there a nicer way?
The way I tackle this with standard unittest is by subclassing -- overriding data is as easy as overriding methods, after all.
So, I have a base class for the tests:
class MappingTestBase(unittest.TestCase):
    dictype = None
    # write all the tests using self.dictype
and subclasses:
class HashtableTest(MappingTestBase):
    dictype = hashtable.HashDict

class OtherMappingTest(MappingTestBase):
    dictype = othermodule.mappingimpl
Here, the subclasses need override only dictype. Sometimes it's handier to also have MappingTestBase use "hook methods": when the types being tested don't have exactly identical interfaces, the differences can be worked around by having the subclasses override the hook methods as needed. This is the "Template Method" design pattern; see e.g. this page, which has a commented and timelined collection of a couple of video lectures of mine on design patterns -- part II covers Template Method and variants thereof for roughly the first 30 minutes.
You don't have to have all of this in a single module, of course. Though I often find it clearest to lay things out this way, you could also make one separate test module per type you're testing, each importing the module with the abstract base class.
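A sketch of the hook-method variant mentioned above (dict stands in for a real implementation; the hook name is illustrative):

import unittest

class MappingTestBase(unittest.TestCase):
    dictype = None

    def setUp(self):
        if self.dictype is None:
            self.skipTest("abstract base; run the subclasses instead")

    def make_empty(self):
        # hook: default construction; subclasses override this when an
        # implementation needs different constructor arguments
        return self.dictype()

    def test_starts_empty(self):
        self.assertEqual(len(self.make_empty()), 0)

class DictTest(MappingTestBase):
    dictype = dict  # would be hashtable.HashDict etc. in the question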
You could look at testscenarios, which allows you to set a list called scenarios. The code then generates a version of the test class for each scenario in the list.
See the example.
So in your case the scenarios would be a list like [('hash', {'dicttype': hashtable.HashDict}), ('other', {'dicttype': otherimpl.OtherDict})], and you would use self.dicttype in your test code.
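A minimal sketch of that (note testscenarios expects named scenarios, i.e. (name, attributes-dict) pairs; the otherimpl module and class names are illustrative):

from testscenarios import TestWithScenarios
import hashtable
import otherimpl  # hypothetical second implementation module

class MappingScenarioTest(TestWithScenarios):
    scenarios = [
        ('hashdict', {'dicttype': hashtable.HashDict}),
        ('otherdict', {'dicttype': otherimpl.OtherDict}),
    ]

    def test_starts_empty(self):
        # runs once per scenario, with self.dicttype bound accordingly
        self.assertEqual(len(self.dicttype()), 0)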

What approach(es) have you used for lightweight Python unit-tests on App Engine?

I'm about to embark on some large Python-based App Engine projects, and I think I should check with Stack Overflow's "wisdom of crowds" before committing to a unit-testing strategy. I have an existing unit-testing framework (based on unittest with custom runners and extensions) that I want to use, so anything "heavy-weight"/"intrusive" such as nose, webtest, or gaeunit doesn't seem appropriate. The crucial unit tests in my worldview are extremely lightweight and fast ones, ones that run in an extremely short time, so I can keep running them over and over all the time without breaking my development rhythm (e.g., for a different project, I get 97% or so coverage for a 20K-lines project with several dozens of super-fast tests that take 5-7 seconds, elapsed time, for a typical run, overall -- that's what I consider a decent suite of small, fast unit-tests). I'll have richer/heavier tests as well of course, all the way to integration tests with selenium or windmill, that's not what I'm asking about;-) -- my focus in this question (and in most of my development endeavors;-) is on the small, lightweight unit-tests that lightly and super-rapidly cover my code, not on the deeper ones.
So I think what I need is essentially a set of small, very lightweight simulations of the various key App Engine subsystems -- data store, memcache, request/response objects and calls to webapp handlers, user handling, mail, &c, roughly in this order of priority. I haven't found exactly what I'm looking for, so it seems to me that I should either rely on mox, as I've done often in the past, which basically means mocking each subsystem used in a given test and setting up all expectations &c (strong, but lots of work each time, and very sensitive to the tested code's internals, i.e. very "white-box"y), or roll my own simulation of each subsystem (and do asserts on the simulated subsystems' states as part of the unit tests). The latter seems feasible, given GAE's Python-side strong "stubs" architecture... but I can't believe I need to roll my own, i.e., that nobody's already written such simple-minded simulators!-) E.g., for the datastore, it looks like what I need is more or less the "datastore on file" stub that's already part of the SDK, plus a way to mark it read-only and easy-to-use accessors for assertions about the datastore's state; and so forth, subsystem by subsystem -- each seems to need "just a bit more" than what's already in the SDK, "perched on top" of the existing "stubs" architecture.
So, before diving in and spending a day or two of precious development time "rolling my own" simulations of GAE subsystems for unit testing purposes, I thought I'd double check with the SO crowd and see what y'all think of this... or, if there's already some existing open source set of such simulators that I can simply reuse (or minimally tweak!-), and which I've just failed to spot in my searching!-)
Edit: to clarify, if I do roll my own, I do plan to leverage the SDK-supplied stubs where feasible; but for example there's no stub for a datastore that gets initially read in from a file but then not saved at the end, so I need to subclass and tweak the existing one (which also doesn't offer particularly convenient ways to do asserts on its state -- same for the mail service stub, etc). That's what I mean by "rolling my own" -- not "rewriting from scratch"!-)
Edit: "why not GAEUnit" -- GAEUnit is nice for its own use cases, but running dev_appserver and seeing results in my browser (or even via urllib.urlopen) is definitely not what I'm after -- I want to use a fully automated setup, suitable for running within an existing test-running framework which is based on extending unittest, and no HTTP in the way (said framework defines a "fast" test as one that among other thing does no sockets and minimal disk I/O -- we simulate or mock these -- so via gaeunit I could do no better than "medium" tests) + no convenient way to prepopulate datastore for each test (and no OO structure to help customize things).
You don't need to write your own stubs; the SDK includes them, since they're what it uses to emulate the production APIs. Not all of them are suitable for use in unit tests, but most are. Check out this code for an example of the setup/teardown you need to make use of the built-in stubs.
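In the spirit of that linked setup code, a rough sketch of registering the SDK's datastore stub by hand (exact constructor arguments vary by SDK version):

import os
import unittest
from google.appengine.api import apiproxy_stub_map, datastore_file_stub

class DatastoreStubTest(unittest.TestCase):
    def setUp(self):
        os.environ['APPLICATION_ID'] = 'test-app'
        # fresh stub map per test; datastore_file=None keeps data in memory
        apiproxy_stub_map.apiproxy = apiproxy_stub_map.APIProxyStubMap()
        stub = datastore_file_stub.DatastoreFileStub('test-app', None, None)
        apiproxy_stub_map.apiproxy.RegisterStub('datastore_v3', stub)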
NoseGAE is a nose plugin that supports unit tests by automatically setting up the development environment and a test datastore for you. Very useful when developing on dev_appserver.
I use GAEUnit for my Google App Engine app and I am quite happy with the speed of the tests. The thing that I like about GAEUnit (and I am sure WebTest does it too) is that it creates its own stubbed versions of everything for testing, leaving your "live" versions alone.
So the datastore that you may be using for development is left as-is when you run your GAEUnit tests.
I might also add that Fixture has been very useful in my unit tests. It lets you create models in a declarative syntax, which it converts into stored entities that you can load in your tests. This way you have the same data set at the beginning of every test case, which saves you from having to create data by hand at the start of every test. Here is an example from the Fixture documentation.
Given this model:
from google.appengine.ext import db

class Entry(db.Model):
    title = db.StringProperty()
    body = db.TextProperty()
    added_on = db.DateTimeProperty(auto_now_add=True)
Your fixture would look like this:
from fixture import DataSet

class EntryData(DataSet):
    class great_monday:
        title = "Monday Was Great"
        body = """\
Monday was the best day ever.
"""
Note, however, that I ran into the following issues:
1. This bug, though the included patch does remedy it.
2. The datastore is not reset between test cases by default, so I use the following to force a reset for each test case:
import os
import unittest
from google.appengine.api import apiproxy_stub_map
# datafixture and dset are the author's own fixture-loading modules

class TycoonTest(unittest.TestCase):
    def setUp(self):
        # Clear out the datastore before starting the test.
        apiproxy_stub_map.apiproxy._APIProxyStubMap__stub_map['datastore_v3'].Clear()

        self.data = self.load_data()
        self.data.setup()
        os.environ['SERVER_NAME'] = "dev_appserver"
        self.after_setUp()

    def load_data(self):
        return datafixture.data(*dset.__all__)

    def after_setUp(self):
        """Hook for subclasses to run extra setup."""
        pass

    def tearDown(self):
        # Tear down the fixture data.
        try:
            self.data.teardown()
        except Exception:
            pass
The SDK 1.4.3 Testbed API provides easy configuration of stub libraries for local integration tests.
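For reference, a minimal Testbed setup looks roughly like this (stub initializers exist for most services):

import unittest
from google.appengine.ext import testbed

class TestbedExample(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub()  # in-memory datastore
        self.testbed.init_memcache_stub()

    def tearDown(self):
        self.testbed.deactivate()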
Since version 1.3.1 of the SDK there is a built-in unit test framework.
It is Java-only right now, but I feel that:
it is much the same as what you describe in your question (and more, such as running tests in the cloud, for example),
and it is quite possible to port/implement the same on Python using the SDK.
So does the author of this framework, Max Ross, who explicitly tells us about it in his I/O presentation "Testing techniques for Google App Engine".
Does anyone have any updates on this topic?

Help needed -- Is a class necessary in Python scripting?

I am creating an interface for Python scripting.
Later I will also be doing Python scripting for automated testing.
Is it necessary that I use classes in my code? So far I have written the code with dictionaries, lists, functions, and global and local variables.
Is a class necessary? Please help me with this.
No, of course classes are not a must. Since Python is a scripting language, you can simply write your scripts without defining your own classes.
Classes are useful if you implement a more complex program that needs a structured approach, where OOP benefits (encapsulation, polymorphism) help you do that.
It's not needed to make it work, but I would argue that it will become messy to maintain if you do not encapsulate certain things in classes. Classes are something that should help the programmer organize his/her code, not just nice-to-have fluff.
No, you don't need to use classes for scripting.
However, when you start using the unit testing framework unittest, that will involve classes, so you need to understand at least how to subclass the TestCase class, e.g.:
import unittest
import os

class TestLint(unittest.TestCase):
    def testLintCreatesLog(self):
        # stuff that does things to create the file lint.log removed...
        assert os.path.exists('lint.log')        # this should be here after lint
        assert os.path.getsize('lint.log') == 0  # nothing in the log - assume happy

if __name__ == '__main__':
    # When this module is executed from the command line, run all its tests
    unittest.main()
Not necessary, since Python is not a purely object-oriented language, but certain things are better written as classes (encapsulation). It also becomes easier to build a large project using classes.