I have a custom framework which runs different code for different clients. I have monkeypatched certain methods in order to customize functionality for a client.
Here is the pattern simplified:
# import monkeypatches here
if self.config['client'] == 'cool_dudes':
    from app.monkeypatches import Stuff
if self.config['client'] == 'cool_dudettes':
    from app.monkeypatches import OtherStuff
Here is an example patch:
from app.framework.stuff import Stuff

def function_override(self):
    pass

Stuff.function = function_override
This works fine when the program runs in a batch manner, spinning up from scratch every time. However, when running unit tests, I find that the monkey patches persist across tests, causing unexpected behavior.
I realize that it would be far better to use an object oriented inheritance approach to these overrides, but I inherited this codebase and am not currently empowered to rearchitect it to that degree.
Barring properly re-architecting the program, how can I prevent these monkey patches from persisting across unit tests?
The modules, including app.framework.<whatever>, are not reloaded for every test, so any changes you make in them persist. The same happens if your module is stateful (that's one of the reasons global state is not such a good idea; you should rather keep state in objects).
Your options are to:
undo the monkey-patches when needed, or
change them into something more generic that would change (semi-)automatically depending on the test running, or
(preferred) Do not reinvent the wheel: use an existing, manageable, time-proven solution for your task (or at least base your work on one if it doesn't completely meet your requirements). E.g. if you use the patches for mocking, see "How can one mock/stub python module like urllib". Among the suggestions there is mock.patch, which applies the patch for a specific test and undoes it when the test completes, as sketched below.
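For illustration, a minimal sketch of that approach, reusing app.framework.stuff.Stuff from the question; the patch is active only inside the decorated test and mock.patch restores the original attribute when the test finishes:

from unittest import TestCase
from unittest.mock import patch

def function_override(self):
    pass

class CoolDudesTest(TestCase):
    # the patch applies only inside this test and is undone on exit
    @patch('app.framework.stuff.Stuff.function', function_override)
    def test_patched_behaviour(self):
        from app.framework.stuff import Stuff
        self.assertIs(Stuff.function, function_override)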
Anyone coming here looking for information about monkeypatching might want to have a look at pytest's monkeypatch fixture. It avoids the OP's problem by automatically undoing all modifications after the test function has finished.
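A rough sketch of how that looks, again reusing the Stuff class from the question; anything set through the fixture is reverted as soon as each test ends:

from app.framework.stuff import Stuff

def function_override(self):
    pass

def test_cool_dudes_patch(monkeypatch):
    # reverted automatically once this test finishes
    monkeypatch.setattr(Stuff, 'function', function_override)
    assert Stuff.function is function_override

def test_next_test_sees_the_original():
    assert Stuff.function is not function_override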
The following doctests fail because the first affects the next.
def createNode(type, name=None):
    """Create a new node of `type`

    Example:
        >>> node = createNode("transform", name="myNode")
        >>> node == "myNode"
        True

    """

def getAttr(path):
    """Get attribute from `path`

    Example:
        >>> node = createNode("transform", name="myNode")
        >>> node == "myNode"
        True
        >>> getAttr(node + ".translateX")
        0.0

    """
I need to reset the external resource - in this case the scenegraph of Autodesk Maya - prior to running each individual doctest, with a function like this:
from maya import cmds  # Maya's command module

def setup():
    cmds.file(new=True, force=True)
Granted, I could call the above once per test, which I am doing until I find a better solution, but for the readability and maintenance of this project I'd prefer stowing the setup away in a dedicated function, for when it grows and needs to change.
Python's native doctest and nose both support calling a setup/teardown function, but only at a per-file level.
I'm happy to use any framework, I would also accept any level of hacking to get around it. It's for use with a single module of about 30-100 doctests, to be run on Travis through GitHub.
Unfortunately the easiest and most accurate -- though not the fastest -- option is to reset the scene before each test as you suggest. Apart from some application level globals you'll be resetting all of the scene content before each test.
However, even for trivial tests (create a cube, delete it, etc) you'll be spending a lot of time creating and deleting empty scenes -- a task which Maya is unfortunately rather slow at. Still, this is the best option because it would otherwise require complicated code to manage your invariants inside the tests -- a test that fails could easily leave state in the scene that "smarter" faster methods of resetting will miss, thus messing up the test run.
A lot of Maya people will use mocks instead of scene tests wherever possible: guaranteeing that the same series of calls to cmds or the API is made in the same sequence each time, without guaranteeing the results. That's less satisfying as a test, but it will run faster and catch many accidental changes.
You may want to see if the doctest constraint is negotiable -- a doctest will have to be exec'ed or something similar, so you will not (necessarily) be running your tests in the same scope as user code, which also raises the possibility of code that runs fine in the tests but fails at runtime.
If you run your tests using a maya.standalone instance you'll be able to churn through them a lot faster; the GUI lag is a significant part of the speed issue in using cmds.file(). As long as you don't use GUI elements in the code you're testing -- which is a good idea to avoid anyway, since GUI testing is a much more complex business -- you should be OK.
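For reference, one way the per-docstring reset could be wired into stock doctest via unittest, assuming the module holding the docstrings is importable as mymodule (a made-up name) and that maya.standalone is available on the Travis workers; doctest.DocTestSuite runs its setUp callback before each docstring's examples:

import doctest
import unittest

import maya.standalone
maya.standalone.initialize()
from maya import cmds

import mymodule  # hypothetical: the module containing the doctests

def setup(test):
    # doctest passes the DocTest object in; reset the scene before each one
    cmds.file(new=True, force=True)

def load_tests(loader, tests, ignore):
    tests.addTests(doctest.DocTestSuite(mymodule, setUp=setup))
    return tests

if __name__ == "__main__":
    unittest.main()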
I'm writing integration tests using plain unittest in Python (import unittest) and am creating stubs for some external services. Now I want to run the same tests against a real implementation, but also keep the stubs. That way I can run the tests with and without the stubs and compare behaviour.
I'm running my tests both from setuptools and through PyCharm. Is there some generic way for me to set/inject/bootstrap a parameter which tells my code whether to use the stub or the real implementation? Command line preferable. Any pointers appreciated. :)
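Short of a full framework, one low-tech option is an environment variable that both the setuptools test command and a PyCharm run configuration can set. A minimal sketch, with StubService, RealService and PaymentTest as made-up names standing in for your own services and tests:

import os
import unittest

USE_STUBS = os.environ.get('USE_STUBS', '1') == '1'

class StubService(object):          # stand-in for your stub
    def charge(self, amount):
        return True

class RealService(object):          # stand-in for the real client
    def charge(self, amount):
        return True  # would call the external service here

class PaymentTest(unittest.TestCase):
    def setUp(self):
        # choose the implementation once, based on the environment
        self.service = StubService() if USE_STUBS else RealService()

    def test_charge(self):
        self.assertTrue(self.service.charge(10))

Running e.g. USE_STUBS=0 python -m unittest discover then exercises the real services, while the default keeps the stubs in place.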
It sounds like you are looking for a mocking framework. Mocking frameworks allow you to create a 'stub' for the method from within your test. This is good because you don't want to be inserting any test specific code into your actual code.
One of the more popular mocking frameworks for Python 2.x is python-mock (in fact it comes with Python 3 as unittest.mock), so you can write the code as:
from unittest import TestCase
from mock import MagicMock  # on Python 3: from unittest.mock import MagicMock

class FooTest(TestCase):
    def test_foo_mocked(self):
        bar = MagicMock()
        bar.return_value = 'fake_val'
        self.assertEqual(bar(), 'fake_val')

    def test_foo_real(self):
        # here `bar` would be the real implementation
        self.assertEqual(bar(), 'real_val')
Side Note:
I would really recommend that you think of these as completely unrelated tests. There are many benefits to keeping your integration tests separate from your unit tests. Thinking of them as two different ways of running the 'same test' may encourage you to write bad tests. Unit tests should be able to test things that would be difficult or impossible to test through integration tests and vice versa.
I am working on a project that aims to augment the Python socket messages with partial ordering information. The library I'm building is written in Python, and needs to be interposed on an existing system's messages sent through the socket functions.
I have read some of the resources out there, namely the answer by @Omnifarious at this question: python-importing-from-builtin-library-when-module-with-same-name-exist
There is an extremely ugly and horrible thing you can do that does not involve hooking the import mechanism. This is something you should probably not do, but it will likely work. It turns your calendar module into a hybrid of the system calendar module and your calendar module.
I have implemented the import mechanism solution, but we have decided this is not the direction we'd like to take, since it relies too much on the environment. The solution to merge classes into a hybrid, rather than relying on the import mechanisms, seems to be the best approach in my case.
Why has the hybrid been called an ugly and horrible solution? I'd like to start implementing it in my project but I am wary of the warnings. It does seem a bit hackish, but since it would be part of an installation script, wouldn't it be OK to run this once?
Here is a code snippet where the interposition needs to intercept the socket message before it's sent:
class vector_clock:
    def __init__(self):
        """
        Initiate the clock with the object
        """
        self.clock = [0, 0]

    def sendMessage(self):
        """
        Send Message to the server
        """
        self.msg = "This is the test message to that will be interposed on"
        self.vector_clock.increment(0)  # We are clock position 0
        # Some extraneous formatting details removed for brevity….
        # connectAndSend needs interpositioning to include the vector clock
        self.client.connectAndSend(totalMsg)
        self.client.s.close()
From my understanding of your post, you wish to modify the existing socket library to inject your own functionality into it.
Yes, this is completely doable, and possibly it is even the easiest solution to your problem, but you have to consider all of the implications of what you are doing.
The most important point is that you are not just modifying socket for yourself, but for anything that runs in any part of your process and uses the socket library, unless it uses its own class loader. I understand that there is probably some existing library you are using which uses socket and you want to inject this functionality into it, but this will affect EVERYTHING.
From this you have to consider the question: is your change 100% backwards compatible? Unless you can guarantee that you know every single use case of socket by any library used by your process (hint: you can't), you need to make sure that it completely preserves all existing functionality, or else somewhere down the road stuff in some core library is going to mysteriously break and you will have no idea why and no way to debug it. An example of something 100% backwards compatible (or as close as it is possible to get) is injecting a decorator which saves timing information to one of your own modules.
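For instance, such a backwards-compatible injection might look roughly like this; the wrapper forwards to the original send and only records a timing, with my_metrics standing in for one of your own (hypothetical) modules:

import socket
import time

import my_metrics  # hypothetical module of your own that collects timings

_original_send = socket.socket.send

def timed_send(self, data, *args, **kwargs):
    start = time.time()
    try:
        # original behaviour is fully preserved
        return _original_send(self, data, *args, **kwargs)
    finally:
        my_metrics.record('socket.send', time.time() - start)

socket.socket.send = timed_send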
If you completely understand this and still think that your solution is a good one then I say "go for it". However, have you considered any alternatives?
If you just need to inject this functionality for a specific set of libraries that you use, then I would suggest doing something like patching: https://docs.python.org/3/library/unittest.mock.html#unittest.mock.patch
You could subclass whatever core library class you want to modify and then patch the library to use your class instead. At its core, what patch does is modify the global bindings used in the target module to use a different class/module than the one it had originally used.
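A rough sketch of that pattern; some_library and run_the_code_that_uses_some_library are made-up names for whichever third-party code you want to redirect:

import socket
from unittest import mock

class OrderedSocket(socket.socket):
    """socket.socket subclass that could prepend partial-ordering data."""
    def send(self, data, *args, **kwargs):
        # augment `data` with vector-clock information here
        return super(OrderedSocket, self).send(data, *args, **kwargs)

# assuming some_library does `from socket import socket`, rebind that name
# to the subclass only while the patch is active
with mock.patch('some_library.socket', OrderedSocket):
    run_the_code_that_uses_some_library()  # hypothetical entry point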
PS. I don't think yours is a situation which calls for hooking the import mechanism.
I am about to make a game using Python and the libtcod roguelike game library.
More to the point, I am using PyMock because I am just starting to learn Test-Driven Development, and I am determined not to cheat. I really want to get into the habit of doing it properly, and according to TDD I need a failing unit test before I write my first line of code.
I figure my first test of my "production" code should be that its dependency, libtcodpy, is imported.
My testing file:
#!/usr/bin/python
import pymock # for mocking and unit testing
import game # my (empty) production code file, game.py
class InitializeTest(pymock.PyMockTestCase):
def test_libtcod_is_imported(self):
# How do I test that my production file imports the libtcodpy module?
if __name__=="__main__":
import unittest
unittest.main()
Please:
1) (python people) How do I test that the module is loaded?
2) (TDD people) Should I be unit testing something this basic? If not, what is the first thing I should be testing?
1) 'your_module' in sys.modules.
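That is, something along these lines (a sketch only, reusing game and libtcodpy from the question):

import sys
import game  # importing game runs game.py, which is expected to import libtcodpy

assert 'libtcodpy' in sys.modules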
Don't actually use that, though:
2)
What should your library do?
Is it “have a dependency on libtcodpy”? I think not.
You've just made a design choice that wasn't test-driven!
Write a test that demonstrates how you want to use the library. Don't think about how you're going to implement it. For example:
player = my_lib.PlayerCharacter()
assert player.position == (0, 0) # or whatever assert syntax `pymock` uses
press_key('k')
assert player.position == (0, 1)
Or something similar. (I don't know what you want your library to do, or how much libtcod provides.)
The way I usually think about TDD (and BDD) is at two levels of development: acceptance-testing level, and unit-testing level.
First thing I would do is write stories (acceptance criteria). What is the core feature of your application? Define an end-to-end scenario that exercises one feature and goes end to end with it. That's your first story. Write a test for it, using an acceptance-testing (or integration-testing) framework. Unfortunately, I don't know the Python tools, but in Java I would use JBehave or FitNesse. It would be something very high-level, far away from the code, that treats your application as a black box. Something like "when my input parameters are xxx and I run my application, the expected output is yyyy".
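In Python, the same black-box idea could be written with plain unittest; run_application and its result object are made-up names here:

import unittest

class GameAcceptanceTest(unittest.TestCase):
    def test_player_moves_north_when_k_is_pressed(self):
        # treat the application as a black box: inputs in, observable output out
        output = run_application(inputs=['k'])  # hypothetical entry point
        self.assertEqual(output.player_position, (0, 1))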
Run this test; it will fail because the underlying application doesn't exist. Create the minimal set of classes to make it go red (and not throw an exception anymore). That's when you need to start the second phase of TDD: unit TDD. It's basically a descending analysis, from the top level down to the details, and this phase will contain a lot of red-green-refactor cycles, bringing a lot of different units into the game.
From time to time, re-run your original acceptance test, or refine it if your growing architecture and analysis forced you to make changes to specifications (theoretically, it shouldn't happen at that stage, but in practice it does, very often). When your acceptance test is completely green, you're done with that story, rinse and repeat.
All of that brings me to my point: pure TDD (I mean unit TDD) is not practical. I mean, I really like TDD, but trying to follow it religiously will be more of a hassle than a help in the long run. Sometimes you will go and spike an approach to see if it fits well with the rest of your project, without writing tests first for it, and potentially rewrite it using TDD afterwards. But as long as you have acceptance tests to cover the whole lot, you're fine.
Even if there is a way to test that, I'd recommend not doing it.
Test from the client's perspective (outside-in): what behavior is provided by your SUT (Game)? Your tests (or your users) don't need to know (or care) that you expose this behavior using a library. As long as the behavior isn't broken, your tests should pass.
Also like another answer says, maybe you don't need the dependency - there may be a simpler solution (e.g. a hashtable might do where you instinctively jumped on a relational database). Listen to the tests... let the tests pull in behavior.
This also leaves you free to change the dependency in the future without having to fix a bunch of tests.
I have a class that connects three other services, specifically designed to make implementing the other services more modular, but the bulk of my unit test logic is in mock verification. Is there a way to redesign to avoid this?
Python example:
class Input(object): pass
class Output(object): pass
class Finder(object): pass
class Correlator(object): pass
def __init__(self, input, output, finder):
pass
def run():
finder.find(input.GetRows())
output.print(finder)
I then have to mock input, output and finder. Even if I do make another abstraction and return something from Correlator.run(), it will still have to be tested as a mock.
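For context, the kind of mock-verification test being described might look roughly like this with unittest.mock, against the Correlator above:

from unittest import TestCase, mock

class CorrelatorTest(TestCase):
    def test_run_wires_the_collaborators_together(self):
        input, output, finder = mock.Mock(), mock.Mock(), mock.Mock()
        Correlator(input, output, finder).run()
        # all the test can do is verify interactions between the mocks
        finder.find.assert_called_once_with(input.GetRows.return_value)
        output.print.assert_called_once_with(finder)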
Just ask yourself: what exactly do you need to check in this particular test case? If that check does not depend on the other classes being real rather than dummies, then you are OK.
However, a lot of mocks is usually a smell, in the sense that you are probably trying to test integration without actually doing integration. So if you are assuming that a class which passes its tests with mocks will also work fine with the real classes, then yes, you have to write some more tests.
Personally, I don't write many unit tests at all. I'm a web developer and I prefer functional tests, which exercise the whole application via HTTP requests, as users would. Your case may be different.
There's no reason to use only unit tests - maybe integration tests would be more useful for this case. Initialize all the objects properly, use the main class a bit, and assert on the (possibly complex) results. That way you'll test interfaces, output predictability, and other things which are important further up the stack. I've used this before, and found that something which is difficult to integration-test probably has too many attributes/parameters or too complicated or wrongly formatted output.
At a quick glance, this does look like the level of mocking has become too large. If you're in a dynamic language (I'm assuming yes here, since your example is in Python), I'd try to construct subclasses of the production classes with the most problematic methods overridden to present mocked data, so you'd get a mix of production and mocked code, as sketched below. If your code path doesn't allow for instantiating the objects, I'd try monkey patching in replacement methods that return mock data.
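A small sketch of that first idea, subclassing the Finder from the question and overriding only the problematic lookup to serve captured data:

class FakeFinder(Finder):
    """Production Finder with only the expensive lookup overridden."""
    def find(self, rows):
        # return data captured from a real run instead of doing the real work
        return [('id-1', 'known correct value')]

A Correlator built with a FakeFinder then exercises mostly production code, with only the find call serving canned data.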
Whether or not this is a code smell also depends on the quality of the mocked data. Dropping into a debugger and copy-pasting known-correct data, or sniffing it from the network, is in my experience the preferred way of ensuring that.
Integration vs. unit testing is also an economic question: how painful is it to replace unit tests with integration/functional tests? The larger the scale of your system, the more there is to gain with lightweight mocking, and hence unit tests.