I am trying to get documentation to build on a ReadTheDocs installation via Sphinx. The classes I am documenting inherit from a larger framework, which I cannot easily install and would therefore like to mock. However, Mock seems to be overly greedy and also mocks the classes I actually want to document. The code in question is as follows:
# imports that need to be mocked
from big.framework import a_class_decorator, ..., SomeBaseClass

@a_class_decorator("foo", "bar")
class ToDocument(SomeBaseClass):
    """ Lots of nice documentation here
    which will never appear
    """
    def a_function_that_is_being_documented(self):
        """ This will show up on RTD
        """
I got to the point where I make sure I don't blindly mock the decorator, but am explicit about it in my Sphinx conf.py. Otherwise, I follow the RTD suggestions for mocking modules:
import sys
from unittest.mock import MagicMock

class MyMock(MagicMock):
    @classmethod
    def a_class_decorator(cls, classid, version):
        def real_decorator(theClass):
            print(theClass)
            return theClass
        return real_decorator

    @classmethod
    def __getattr__(cls, name):
        return MagicMock()

sys.modules['big.framework'] = MyMock()
Now I would expect the printout to refer to the class I want documented, e.g. <ToDocument ...>.
However, I always get a Mock for that class as well, <MagicMock spec='str' id='139887652701072'>, which of course does not have any of the documentation I am trying to build. Any ideas?
Turns out the problem was inheritance from a mocked class. Being explicit about the base class and creating an empty class

class SomeBaseClass:
    pass

to patch in via conf.py solved the problem.
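A minimal sketch of what that conf.py arrangement could look like, assuming the big.framework path from above (attaching the real base class to the mocked module is one way to wire it up; the details may differ in your setup):

# conf.py (sketch): mock the framework, but expose a real, empty base class
import sys
from unittest.mock import MagicMock

class SomeBaseClass:
    pass

class MyMock(MagicMock):
    @classmethod
    def a_class_decorator(cls, classid, version):
        def real_decorator(theClass):
            return theClass  # hand the documented class back untouched
        return real_decorator

mocked_framework = MyMock()
mocked_framework.SomeBaseClass = SomeBaseClass  # real class, so subclasses keep their docstrings
sys.modules['big.framework'] = mocked_framework

With this in place, ToDocument really inherits from the empty SomeBaseClass, so Sphinx sees an ordinary class instead of a MagicMock.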
Related
I'm building a Python library magic_lib in which I need to instantiate a Python class (let's call it SomeClass) which is defined in the Python application that would import magic_lib.
What's the appropriate way to use/work on SomeClass when I develop magic_lib, since I don't have SomeClass in the magic_lib repo?
I'm thinking of creating a dummy SomeClass like this, and then excluding it during packaging:
from typing import Any

class SomeClass:
    def __init__(self, *args: Any, **kwargs: Any):
        pass
I'm wondering if this is the right approach. If not, are there any suggestions for how I could approach this problem?
Thanks.
Additional thoughts: maybe I could use importlib like this?
import importlib

my_module = importlib.import_module('some.module.available.in.production')
some_class = my_module.SomeClass()
Here is a more specific example:
Let's say I have two repos: workflows and magic_lib. The workflows repo defines a class named Task. Generally, we define tasks directly within the workflows repo, and everything works just fine. Now, let's say I want to use magic_lib to programmatically define tasks in the workflows repo, something like the following in the workflows repo:
from magic_lib import Generator
tasks: List[Task] = Generator().generate_tasks()
In order to do that, within magic_lib, I need to somehow have access to the class Task so that I can have it returned through the function generate_tasks(). I cannot really import Task defined in workflows from magic_lib. My question is how I can have access to Task within magic_lib.
Original question:
In Python, there are decorators:

from <MY_PACKAGE> import add_method

@add_method
class MyClass:
    def old_method(self):
        print('Old method')
Decorators are functions which take classes/functions/... as an argument:

def add_method(cls):
    class Wrapper(cls):
        def new_method(self):
            print('New method')
    return Wrapper
MyClass is passed as the cls argument to the add_method decorator function. The function can return a new class which inherits from MyClass:
x = MyClass()
x.old_method()
x.new_method()
We can see that the method has been added. YAY!
So to recap, decorators are a great way to pass your user's custom class to your library. Decorators are just functions so they are easy to handle.
Modified question:
Classes can be passed to functions and methods as arguments:
from magic_lib import generate_five_instances
tasks: List[Task] = generate_five_instances(Task)
def generate_five_instances(cls):
    return [cls() for _ in range(5)]
If you come from another language, you might find this weird, but classes are FIRST CLASS CITIZENS in Python. That means you can assign them to variables and pass them as arguments.
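To connect this back to the workflows example, here is a hedged sketch of how magic_lib's Generator could receive the Task class instead of importing it (the task_cls constructor parameter is an assumed name, not part of the original code):

from typing import List, Type

# magic_lib side: the class is injected, never imported from workflows
class Generator:
    def __init__(self, task_cls: Type) -> None:
        self._task_cls = task_cls

    def generate_tasks(self) -> List:
        # instantiate whatever class the caller handed us
        return [self._task_cls() for _ in range(5)]

# workflows side (sketch):
# from magic_lib import Generator
# tasks: List[Task] = Generator(Task).generate_tasks()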
Python 3.6
I'm trying to modify the behavior of a third party library.
I don't want to directly change the source code.
Considering this code below:
class UselessObject(object):
    pass

class PretendClassDef(object):
    """
    A class to highlight my problem
    """
    def do_something(self):
        # A lot of code here
        result = UselessObject()
        return result
I'd like to substitute my own class for UselessObject.
I'd like to know whether using a metaclass in my module to intercept the creation of UselessObject is a valid idea?
EDIT
The answer posted by Ashwini Chaudhary on the same question may be of use to others, as well as the answer below.
P.S. I also discovered that a module-level __metaclass__ doesn't work in Python 3, so my initial question of it 'being a valid idea' is answered: False.
FWIW, here's some code that illustrates Rawing's idea.
class UselessObject(object):
    def __repr__(self):
        return "I'm useless"

class PretendClassDef(object):
    def do_something(self):
        return UselessObject()

# -------

class CoolObject(object):
    def __repr__(self):
        return "I'm cool"

UselessObject = CoolObject

p = PretendClassDef()
print(p.do_something())
output
I'm cool
We can even use this technique if CoolObject needs to inherit from UselessObject. If we change the definition of CoolObject to:
class CoolObject(UselessObject):
    def __repr__(self):
        s = super().__repr__()
        return "I'm cool, but my parent says " + s
we get this output:
I'm cool, but my parent says I'm useless
This works because the name UselessObject has its old definition when the CoolObject class definition is executed.
This is not a job for metaclasses.
Rather, Python allows you to do this through a technique called "monkeypatching", in which you substitute one object for another at run time.
In this case, you'd be swapping thirdparty.UselessObject for your own CoolObject before calling thirdparty.PretendClassDef.do_something.
The way to do that is a simple assignment.
So, supposing the example snippet you gave in the question is the thirdparty module, in your library the code would look like:

import thirdparty

class CoolObject:
    # your class definition here
    ...

thirdparty.UselessObject = CoolObject

One thing you have to take care of: you must rebind the name UselessObject in the place where your target module actually uses it.
If, for example, PretendClassDef and UselessObject are defined in different modules, it depends on how UselessObject is imported: if it is imported with from .useless import UselessObject, the example above is fine; if the module imports useless and later uses it as useless.UselessObject, you have to patch it on the useless module instead.
Also, Python's unittest.mock has a nice patch callable that can properly perform the monkeypatching and undo it, in case you want the modification to be valid only in a limited scope, such as inside one of your functions or inside a with block. That might be the case if you don't want to change the behavior of the thirdparty module in other sections of your program.
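A minimal sketch of that scoped approach, assuming the third-party code is importable as thirdparty and contains the classes from the question:

from unittest.mock import patch
import thirdparty

class CoolObject:
    def __repr__(self):
        return "I'm cool"

# the substitution only holds inside the with block and is undone afterwards
with patch("thirdparty.UselessObject", CoolObject):
    p = thirdparty.PretendClassDef()
    print(p.do_something())  # -> I'm cool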
As for metaclasses, they would only be of use if you needed to change the metaclass of the class you are replacing in this way - and even then, only if you want to insert behavior into classes that inherit from UselessObject. In that case the metaclass would be used to create the local CoolObject, and you'd still proceed as above, but you'd have to perform the monkeypatching before Python runs the class body of any of the subclasses of UselessObject, taking extreme care with any imports from the thirdparty library (this gets tricky if those subclasses are defined in the same file).
This is just building on PM 2Ring's and jsbueno's answers with more context:
If you happen to be creating a library for others to use as a third-party library (rather than using a third-party library yourself), and you need CoolObject to inherit from UselessObject to avoid repetition, the following may be useful to avoid the infinite recursion error you might get in some circumstances:
module1.py
class Parent:
    def __init__(self):
        print("I'm the parent.")

class Actor:
    def __init__(self, parent_class=None):
        if parent_class is not None:  # in case you don't want it to literally be useless 100% of the time
            global Parent
            Parent = parent_class
        Parent()
module2.py
from module1 import *

class Child(Parent):
    def __init__(self):
        print("I'm the child.")

class LeadActor(Actor):  # there's not necessarily a need to subclass Actor, but in this situation it seems like a common thing to do
    def __init__(self):
        Actor.__init__(self, parent_class=Child)

a = Actor(parent_class=Child)  # prints "I'm the child." instead of "I'm the parent."
l = LeadActor()                # prints "I'm the child." instead of "I'm the parent."
Just be careful that the user knows not to set a different value for parent_class with different subclasses of Actor. I mean, if you make multiple kinds of Actors, you'll only want to set parent_class once, unless you want it to change for all of them.
I'm trying to use an abstract factory in Python, minimally reproduced with the following 3 files:
test_factory.py
from factory import Factory

def test_factory():
    factory = Factory.makeFactory()
    product = factory.makeProduct('Hi there')
    print(product)

if __name__ == '__main__':
    test_factory()
factory.py
from abc import ABCMeta, abstractmethod
from product import ConcreteProduct

class Factory(metaclass=ABCMeta):
    @classmethod
    @abstractmethod
    def makeProduct(cls):
        pass

    @classmethod
    def makeFactory(cls):
        return ConcreteFactory()

class ConcreteFactory(Factory):
    @classmethod
    def makeProduct(cls, message):
        return ConcreteProduct(message)
product.py
class ConcreteProduct(object):
    def __init__(self, message):
        self._message = message

    def __str__(self):
        return self._message
What I'm having trouble figuring out is how to mock this code to verify that ConcreteProduct.__init__ is invoked with an appropriate value. Since the test file never sees product.py, I'm not sure how to accomplish this, or if it's even possible. I suspect that there is something more fundamentally wrong with my design here.
The simplest way is to patch the factory.ConcreteProduct module reference.
So your test can be (not tested):
from factory import Factory
from unittest.mock import patch

@patch("factory.ConcreteProduct")
def test_factory(mock_product_factory):
    mock_product = mock_product_factory.return_value
    factory = Factory.makeFactory()
    product = factory.makeProduct('Hi there')
    assert product is mock_product
    mock_product_factory.assert_called_with('Hi there')

if __name__ == '__main__':
    test_factory()
If (and only if) the ConcreteProduct reference in the factory module doesn't exist in your test environment, you can use patch's create=True attribute to inject it.
I would like to point out that the ConcreteProduct reference in the factory module is already a factory. Classes in Python are factories in themselves; this is not a statically typed language, and the factory concept is less rigid than in Java. I come from a Java background and I still use factories in Python, but they only become really useful when you have to manipulate the input to create the correct object; if your factory method is just an argument pass-through to a class reference, consider removing the man in the middle.
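A small illustration of that last point (build_products is a hypothetical helper, not part of the question's code): since a class object is itself callable, it can be passed around and used directly, with no factory in between.

from product import ConcreteProduct

def build_products(product_cls, messages):
    # product_cls can be any callable that accepts a message, e.g. ConcreteProduct itself
    return [product_cls(m) for m in messages]

products = build_products(ConcreteProduct, ['Hi there', 'Bye'])
print(products[0])  # -> Hi there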
So far I've ended up with a solution that I was led to by @Michele dAmico's answer, and it is very close to his.
test_factory.py becomes:
from factory import Factory
from unittest.mock import patch

@patch('product.ConcreteProduct.__init__', return_value=None)
def test_factory(mock_init):
    factory = Factory.makeFactory()
    product = factory.makeProduct('Hi there')
    mock_init.assert_called_with('Hi there')

if __name__ == '__main__':
    test_factory()
Note that I'm patching in the product module, not the factory module, so I'm basically sidestepping factory.py and mocking what I know it's going to import. I don't know how I feel about breaking encapsulation this way, but honestly that's how I feel about mocking in general.
I preferred this over the other answer because it's a little shorter, and because, according to the mock docs, the create=True approach can be dangerous:
By default patch will fail to replace attributes that don’t exist. If
you pass in create=True, and the attribute doesn’t exist, patch will
create the attribute for you when the patched function is called, and
delete it again afterwards. This is useful for writing tests against
attributes that your production code creates at runtime. It is off by
default because it can be dangerous. With it switched on you can write
passing tests against APIs that don’t actually exist!
I would certainly be interested in further discussion as I still have a sense that I can learn some better design methods to make this cleaner.
I've been fighting with nose and fixtures today and can't help but feel I'm doing something wrong.
I had all my tests written as functions and everything was working OK, but I want to move them to test classes as I think it's tidier and the tests can then be inherited from.
I create a connection to the testdb, the transaction to use, and a number of fixtures at a package level for all tests. This connection is used by repository classes in the application to load test data from the DB.
I tried to create my test classes with the class they are to test as an attribute. This was set in the __init__ function of the test class.
The problem I had is that the class I was testing needs to be instantiated with data from my test DB, but nose creates an instance of the test class before the package setup function is run. This means that there are no fixtures when the test classes are created, and the tests fail.
The only way I can get it to work is by adding a module setup function that defines the class I want to test as a global, then using that global in the test methods, but that seems quite hacky. Can anyone suggest a better way?
That isn't very clear, so here is an example of my test module...
from nose.tools import *
from myproject import repositories
from myproject import MyClass

def setup():
    global class_to_test
    db_data_repo = repositories.DBDataRepository()
    db_data = db_data_repo.find(1)
    class_to_test = MyClass(db_data)

class TestMyClass(object):
    def test_result_should_be_an_float(self):
        result = class_to_test.do_something()
        assert isinstance(result, float)
If you know a better way to achieve this then please share :) thanks!
If you are moving toward unittest.TestCase class-based testing, you might as well use the setUp()/tearDown() methods that run around each test method, and setUpClass() to set up class-specific things; nose will handle them just as unittest does, something like this:
import unittest

class TestMyClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        db_data_repo = repositories.DBDataRepository()
        db_data = db_data_repo.find(1)
        cls.class_to_test = MyClass(db_data)

    def test_result_should_be_an_float(self):
        result = self.class_to_test.do_something()
        assert isinstance(result, float)
I am unhappy with TestCase.setUp(): If a test has a decorator, setUp() gets called outside the decorator.
I am not new to Python and I can help myself, but I am looking for a best-practice solution.
TestCase.setUp() feels like a way to handle set-up from before decorators were introduced in Python.
What is a clean solution to setup a test, if the setup should happen inside the decorator of the test method?
This would be a solution, but setUp() gets called twice:
class Test(unittest.TestCase):
    def setUp(self):
        ...

    @somedecorator
    def test_foo(self):
        self.setUp()
Example use case: somedecorator opens a database connection and works like a with-statement: it can't be broken into two methods (setUp(), tearDown()).
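For illustration only, a hypothetical sketch of the kind of decorator meant here (the sqlite3 connection is just a stand-in): the resource lives only for the duration of the decorated call, so there is no natural place to split it into setUp() and tearDown().

import functools
import sqlite3

def somedecorator(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # the connection exists only while the test body runs
        with sqlite3.connect(':memory:'):
            return func(self, *args, **kwargs)
    return wrapper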
Update
somedecorator is from a different application. I don't want to modify it. I could do some copy+paste, but that's not a good solution.
Ugly? Yes:
import inspect

class Test(unittest.TestCase):
    def actualSetUp(self):
        ...

    def setUp(self):
        if not any('test_foo' in i for i in inspect.stack()):
            self.actualSetUp()

    @somedecorator
    def test_foo(self):
        self.actualSetUp()
Works by inspecting the current stack and determining whether we are in any function called 'test_foo', so that should obviously be a unique name.
Probably cleaner to just create another TestCase class specifically for this test, where you can have no setUp method.
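A sketch of that alternative (class names made up for illustration): keep the decorated test in its own TestCase with no setUp(), so nothing runs outside the decorator.

import unittest

class TestEverythingElse(unittest.TestCase):
    def setUp(self):
        ...  # normal setup for the other tests

class TestFoo(unittest.TestCase):  # no setUp(), so nothing runs outside the decorator
    @somedecorator  # the decorator from the question
    def test_foo(self):
        ...  # do the setup here, inside the decorated call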