I want to have doctests for a class where using the class requires a somewhat lengthy setup. Like:
class MyClass:
    def __init__(self, foo):
        self.foo = foo
    def bar(self):
        """Do bar
        >>> # do a multiline generation of foo
        ... ...
        >>> myobj = MyClass(foo)
        >>> print(myobj.bar())
        BAR
        """
        …
    def foobar(self):
        """Do foobar
        >>> # do a multiline generation of foo
        ... ...
        >>> myobj = MyClass(foo)
        >>> print(myobj.foobar())
        FOOBAR
        """
        …
My real case has ~8 methods that I want to document and test with pytest. Repeating the generation of foo everywhere violates the DRY principle and also produces lengthy, unreadable documentation. Is there a way to avoid it?
Ideally it would look like:
class MyClass:
    """My class
    An example way to create the 'foo' argument is
    >>> # do a multiline generation of foo
    ... ...
    """
    def __init__(self, foo):
        self.foo = foo
    def bar(self):
        """Do bar
        >>> myobj = MyClass(foo)
        >>> print(myobj.bar())
        BAR
        """
        …
    def foobar(self):
        """Do foobar
        >>> myobj = MyClass(foo)
        >>> print(myobj.foobar())
        FOOBAR
        """
        …
After some struggling, I found the following solution, which (ab)uses a submodule of my own module:
class MyClass:
    """My class

    An example way to create the 'foo' argument is::

        >>> # do a multiline generation of foo
        >>> foo = ...

    .. only:: doctest

        >>> import mymodule.tests
        >>> mymodule.tests.foo = foo
    """
    def __init__(self, foo):
        self.foo = foo
    def bar(self):
        """Do bar

        .. only:: doctest

            >>> import mymodule.tests
            >>> foo = mymodule.tests.foo

        Here is a good example, using the initialization above::

            >>> myobj = MyClass(foo)
            >>> print(myobj.bar())
            BAR
        """
        …
The "tricks":
I abuse my mymodule.tests submodule to store the results of the initialization (see the sketch below). This works because modules are not re-initialized in subsequent tests.
Doctest (at least under pytest) doesn't care about surrounding reST blocks; it just looks for the appropriate patterns and indentation. Any block is fine, even a conditional one (.. only::). The condition (doctest) is arbitrary; it just needs to evaluate to false so that the block isn't shown in the rendered documentation.
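For reference, the stash submodule itself can stay essentially empty. A minimal sketch of what this trick assumes (the exact layout is illustrative):

# mymodule/tests/__init__.py
# Deliberately (almost) empty. Doctests import this module and attach
# attributes to it (e.g. mymodule.tests.foo = foo) so that doctests run
# later in the same session can read the value back; this works because
# the module object is created only once per interpreter / test run.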
I am still not sure how robust this is with respect to future developments of doctest, pytest and Sphinx.
This answer assumes you are running pytest --doctest-modules, which is hinted at in the question/comments but isn't explicit. See the pytest documentation on using fixtures in doctests. If you don't need the documentation to show the steps that build the foo argument, a fixture will work. getfixture is mentioned in pytest's doctest documentation, but dedicated documentation for it is hard to find.
# conftest.py
import pytest

@pytest.fixture()
def foo_setup():
    foo = [100, 10]  # make a complex foo
    return foo

# my_module.py
class MyClass:
    """
    My class.

    >>> foo = getfixture('foo_setup')
    >>> # consider printing details of foo here
    >>> myobj = MyClass(foo)
    >>> myobj.foo
    [100, 10]
    """
    def __init__(self, foo):
        self.foo = foo

    def foobar(self):
        """
        foobar appends 0 to foo.

        >>> foo = getfixture('foo_setup')
        >>> myobj = MyClass(foo)
        >>> myobj.foobar()
        >>> myobj.foo
        [100, 10, 0]
        """
        self.foo.append(0)
You can also inject foo into the doctest namespace using a fixture. This avoids having any extraneous lines in each doctest; see the pytest documentation on the doctest_namespace fixture.
# conftest.py
import pytest

@pytest.fixture(autouse=True)
def foo_setup(doctest_namespace):
    doctest_namespace["foo"] = [100, 10]  # make a complex foo
# my_module.py
class MyClass:
    """
    My class.

    >>> # consider printing details of foo here
    >>> myobj = MyClass(foo)
    >>> myobj.foo
    [100, 10]
    """
    def __init__(self, foo):
        self.foo = foo

    def foobar(self):
        """
        foobar appends 0 to foo.

        >>> myobj = MyClass(foo)
        >>> myobj.foobar()
        >>> myobj.foo
        [100, 10, 0]
        """
        self.foo.append(0)
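For completeness, both fixture-based variants rely on pytest actually collecting the doctests. One way to enable that (an assumption about your setup; the option can also be made permanent via addopts in pytest.ini) is:

$ pytest --doctest-modules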
Related
Suppose I have some function A.foo() that instantiates and uses an instance of B, calling the member function bar on it.
How can I set return_value on a mocked instance of B when I'm testing my A class, given that I don't have access to the instance of B? Maybe some code would illustrate this better:
import unittest
import unittest.mock

class A:
    def foo(self):
        b = B()
        return b.bar()

class B:
    def bar(self):
        return 1

@unittest.mock.patch("__main__.B")
class MyTestCase(unittest.TestCase):
    def test_case_1(self, MockB):
        MockB.bar.return_value = 2
        a = A()
        self.assertEqual(a.foo(), 2)

test_case = MyTestCase()
test_case.test_case_1()
This fails with:
AssertionError: <MagicMock name='B().bar()' id='140542513129176'> != 2
Apparently the line MockB.bar.return_value = 2 didn't modify the return value of the method.
I think you are not instantiating MockB. You can directly mock "__main__.B.bar":
@unittest.mock.patch("__main__.B.bar")
class MyTestCase(unittest.TestCase):
    def test_case_1(self, MockB):
        MockB.return_value = 2
        a = A()
        self.assertEqual(a.foo(), 2)
You have just one mistake in your code. Replace this line:
MockB.bar.return_value = 2
with:
MockB.return_value.bar.return_value = 2
and it will work.
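For clarity, here is the test from the question with only that line changed (a sketch; it keeps the __main__ patch target, so it assumes A and B are defined in the script being run, exactly as in the question):

import unittest
import unittest.mock

@unittest.mock.patch("__main__.B")
class MyTestCase(unittest.TestCase):
    def test_case_1(self, MockB):
        # Configure the mock *instance* that B() returns, not the class object.
        MockB.return_value.bar.return_value = 2
        a = A()
        self.assertEqual(a.foo(), 2)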
I assume the piece of code you pasted is just a toy example. If classes A and B live in another file, e.g. src/somedir/somefile.py, don't forget to patch the full path.
@unittest.mock.patch("src.somedir.somefile.B")
class MyTestCase(unittest.TestCase):
    ...
Update
To further expand on this, you can see some usage in the docs:
>>> class Class:
...     def method(self):
...         pass
...
>>> with patch('__main__.Class') as MockClass:
...     instance = MockClass.return_value
...     instance.method.return_value = 'foo'
...     assert Class() is instance
...     assert Class().method() == 'foo'
...
So in your case:
MockB.bar.return_value is like calling a static method e.g. print(MockB.bar())
MockB.return_value.bar.return_value is like calling a class/instance method e.g. print(MockB().bar())
To visualize this:
import unittest.mock

class SomeClass:
    def method(self):
        return 1

@unittest.mock.patch("__main__.SomeClass")
def test_mock(mock_class):
    print(mock_class)
    print(mock_class.return_value)
    mock_class.method.return_value = -10
    mock_class.return_value.method.return_value = -20
    print(SomeClass.method())
    print(SomeClass().method())

test_mock()
$ python3 test_src.py
<MagicMock name='SomeClass' id='140568144584128'>
<MagicMock name='SomeClass()' id='140568144785952'>
-10
-20
As you can see, mock_class.return_value is the one used for instance operations such as SomeClass().method().
You can solve this without mock.patch. Change the foo method to accept a factory for the dependency it should construct (dependency injection).
class A:
    def foo(self, b_factory: 'Callable[[], B]' = B):
        b = b_factory()
        return b.bar()

def normal_code():
    a = A()
    assert a.foo() == ...

def test():
    dummy_b = ...  # build a dummy object here however you like
    a = A()
    assert a.foo(b_factory=lambda: dummy_b) == 2
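For illustration only, one possible dummy (the DummyB name is made up here, not part of the original answer):

class DummyB:
    def bar(self):
        return 2

def test_with_dummy():
    a = A()
    # Inject the dummy through the factory parameter instead of patching.
    assert a.foo(b_factory=DummyB) == 2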
If we have something like:
foo.py
from bar import bar

class foo:
    global black;
    black = True;
    bar = bar()
    bar.speak()

f = foo()
bar.py
class bar:
    def speak(self):
        if black:
            print "blaaack!"
        else:
            print "whitttte!"
when we run
python foo.py
we get
NameError: global name 'black' is not defined
What's the best practice for doing something like this?
Should I pass it in the method?
Should the bar class have a parent variable?
For context, in practice the black global is for a debugging step.
In Python, globals are specific to a module, so the global in your foo.py is not accessible in your bar.py, at least not the way you have it written.
If you want every instance of foo to have its own value of black, then use an instance variable as Ivelin has shown. If you want every instance of foo to share the same value of black use a class variable.
Using an instance variable:
# foo.py
from bar import bar

class foo:
    # Python "constructor"..
    def __init__(self):
        # Define the instance variables
        self.bar = bar()
        # Make bar talk
        self.bar.speak()

    # Create a function for making this foo's bar speak whenever we want
    def bar_speak(self):
        self.bar.speak()

################################################################################

# bar.py
class bar:
    # Python "constructor"..
    def __init__(self):
        # Define the instance variables
        self.black = True

    def speak(self):
        if self.black:
            print "blaaack!"
        else:
            print "whitttte!"
Playing with the code:
>>> f = foo()
blaaack!
>>> b = foo()
blaaack!
>>> b.bar.black = False
>>> b.bar_speak()
whitttte!
>>> f.bar_speak()
blaaack!
Using a class variable:
# foo.py
from bar import bar

class foo:
    # Python "constructor"..
    def __init__(self):
        # Define the instance variables
        self.bar = bar()
        # Make bar talk
        self.bar.speak()

    # Create a function for making this foo's bar speak whenever we want
    def bar_speak(self):
        self.bar.speak()

################################################################################

# bar.py
class bar:
    black = True

    def speak(self):
        if bar.black:
            print "blaaack!"
        else:
            print "whitttte!"
Playing with the code:
>>> f = foo()
blaaack!
>>> b = foo()
blaaack!
>>> bar.black = False
>>> b.bar_speak()
whitttte!
>>> f.bar_speak()
whitttte!
Here is what I would do:
foo.py

from bar import bar

class foo:
    bar = bar(black=True)
    bar.speak()

f = foo()

bar.py

class bar:
    def __init__(self, black):
        self.black = black

    def speak(self):
        if self.black:
            print "blaaack!"
        else:
            print "whitttte!"
This may be a stupid / trivial question, but I'm confused on this matter.
What is the encouraged (Pythonic) way of declaring instance fields: in the constructor, or in the class body itself?
class Foo:
    """ Foo class """
    # While we are at it, how to properly document the fields?
    bar = None

    def __init__(self, baz):
        """ Make a Foo """
        self.bar = baz
OR:
class Foo:
    """ Foo class """
    def __init__(self, baz):
        """ Make a Foo """
        self.bar = baz
It's a matter of expectations. People reading your code will expect that the attributes defined at the top level of the class will be class attributes. The fact that you then always replace them in __init__ will only create confusion.
For that reason you should go with option 2, defining instance attributes inside __init__.
In terms of documenting the attributes, pick a docstring style and stick to it; I like Google's, but other options include NumPy's.
class Foo:
    """A class for foo-ing bazs.

    Args:
        baz: the baz to foo

    Attributes:
        bar: we keep the baz around
    """
    def __init__(self, baz):
        self.bar = baz
To keep it simple, let us define class Foo with a class variable bar:
In [34]: class Foo: bar = 1
Now, observe:
In [35]: a = Foo()
In [36]: a.bar
Out[36]: 1
In [37]: Foo.bar = 2
In [38]: a.bar
Out[38]: 2
A change to Foo.bar affects existing instances of the class.
For this reason, one generally prefers instance variables and avoids class variables, unless these side effects are actually wanted.
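To round this out, a hypothetical continuation of the same session: assigning through the instance creates an instance attribute that shadows the class attribute from then on.

In [39]: a.bar = 10   # creates an instance attribute on a

In [40]: Foo.bar = 3

In [41]: a.bar        # the class attribute no longer shows through
Out[41]: 10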
I'm using Michael Foord's mock library and have a question about it.
I want to mock a property, so I do this:
eggs = mock.PropertyMock(return_value='eggs')
spam = mock.Mock()
type(spam).eggs = eggs
assert spam.eggs == 'eggs'
This works brilliantly. However I find the type() part ugly and would love to do something like this:
eggs = mock.PropertyMock(return_value='eggs')
spam = mock.Mock(eggs = eggs)
assert spam.eggs == 'eggs'
The last example doesn't work as expected, spam.eggs becomes a method instead of a property.
I know I can use mock.Mock(eggs = 'eggs') so eggs is not a method, but I want to be able to assert the property. :-)
I am using Python 2.7 with the standalone mock package, but I assume unittest.mock behaves the same.
patch can help you to a degree; the code below is taken from the official mock documentation:
>>> class Foo(object):
...     @property
...     def foo(self):
...         return 'something'
...     @foo.setter
...     def foo(self, value):
...         pass
...
>>> with patch('__main__.Foo.foo', new_callable=PropertyMock) as mock_foo:
...     mock_foo.return_value = 'mockity-mock'
...     this_foo = Foo()
...     print this_foo.foo
...     this_foo.foo = 6
...
mockity-mock
>>> mock_foo.mock_calls
[call(), call(6)]
What I want to do is something like:
class Foo(object):
    def __init__(self):
        pass
    def f(self):
        print "f"
    def g(self):
        print "g"

# programmatically set the "default" operation
fer = Foo()
fer.__call__ = fer.f

# a different instance does something else as its
# default operation
ger = Foo()
ger.__call__ = ger.g

fer()  # invoke different functions on different
ger()  # objects depending on how they were set up.
But as of 2.7 (which I'm currently using) I can't do this; the attempt at fer() raises an exception.
Is there a way to, in effect, set a per-instance __call__ method?
The normal stuff with types.MethodType unfortunately doesn't work here since __call__ is a special method.
From the data model:
Class instances are callable only when the class has a __call__() method; x(arguments) is a shorthand for x.__call__(arguments).
This is slightly ambiguous as to what is actually called, but it's clear that your class needs to have a __call__ method.
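To make the lookup rule concrete, here is a small sketch (my own, matching the question's Python 2 / new-style class setting) showing that an instance attribute named __call__ is simply ignored:

class C(object):
    pass

c = C()
c.__call__ = lambda: "from the instance"

try:
    c()
except TypeError as e:
    # Special methods are looked up on the type, not the instance,
    # so this still raises: 'C' object is not callable
    print(e)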
You'll need to create some sort of hack:
class Foo(object):
    def __init__(self):
        pass
    def f(self):
        print "f"
    def g(self):
        print "g"
    def __call__(self):
        return self.__call__()

f = Foo()
f.__call__ = f.f
f()

g = Foo()
g.__call__ = g.g
g()
Careful with this though, it'll result in an infinite recursion if you don't set a __call__ on an instance before you try to call it.
Note that I don't actually recommend naming the attribute that you rebind __call__. The point here is to demonstrate that Python translates f() into f.__class__.__call__(f), so there is nothing you can do to change that on a per-instance basis; the class's __call__ will be called no matter what you do. You just need to make the class's __call__ change its behaviour per instance, which is easily achieved.
You could use a setter-type method to attach actual bound methods to your instances (rather than plain functions), and of course that could be turned into a property (see the sketch after the code below):
import types

class Foo(object):
    def __init__(self):
        pass
    def f(self):
        print "f"
    def g(self):
        print "g"
    def set_func(self, f):
        self.func = types.MethodType(f, self)
    def __call__(self, *args, **kwargs):
        self.func(*args, **kwargs)

f = Foo()
f.set_func(Foo.f)
f()

def another_func(self, *args):
    print args

f.set_func(another_func)
f(1, 2, 3, "bar")
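As mentioned, the same idea can be dressed up as a property; a rough sketch (still Python 2 style, the names are mine):

import types

class Foo(object):
    def _get_func(self):
        return self._func
    def _set_func(self, f):
        # Bind the plain function to this instance
        self._func = types.MethodType(f, self)
    func = property(_get_func, _set_func)

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

def g(self):
    print "g"

foo = Foo()
foo.func = g   # goes through the property setter
foo()          # prints "g"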
You might be trying to solve the wrong problem.
Since Python allows procedural creation of classes, you could write code like this:
>>> def create_class(cb):
...     class Foo(object):
...         __call__ = cb
...     return Foo
...
>>> Foo1 = create_class(lambda self: 42)
>>> foo1 = Foo1()
>>> foo1()
42
>>> Foo2 = create_class(lambda self: self.__class__.__name__)
>>> foo2 = Foo2()
>>> foo2()
'Foo'
Please note, though, that Foo1 and Foo2 do not have a common base class in this case, so isinstance and issubclass will not work as you might expect. If you need them to have a common base class, I would go for the following code:
>>> class Foo(object):
...     @classmethod
...     def create_subclass(cls, cb):
...         class SubFoo(cls):
...             __call__ = cb
...         return SubFoo
...
>>> Foo1 = Foo.create_subclass(lambda self: 42)
>>> foo1 = Foo1()
>>> foo1()
42
>>> Foo2 = Foo.create_subclass(lambda self: self.__class__.__name__)
>>> foo2 = Foo2()
>>> foo2()
'SubFoo'
>>> issubclass(Foo1, Foo)
True
>>> issubclass(Foo2, Foo)
True
I really like the second way, as it provides a proper class hierarchy and looks quite clean to me.
Possible solution:
class Foo(object):
    def __init__(self):
        # default: do nothing when called
        self._callable = lambda: None
    def f(self):
        print "f"
    def set_callable(self, func):
        self._callable = func
    def g(self):
        print "g"
    def __call__(self):
        return self._callable()

d = Foo()
d.set_callable(d.g)
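Continuing that snippet, calling the instance now dispatches to whatever was registered:

d()  # prints "g"

e = Foo()
e.set_callable(e.f)
e()  # prints "f"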