Can pytest instantiate just one class object to test all its methods? - python

I think this should be a very common scenario: you have a class with a few methods, and you want to write unit tests for them, like test_method_1, test_method_2, ... test_method_n. I do not want to instantiate the class object for each of these test functions; that would be redundant and inefficient. However, reading the pytest docs, it looks to me as though a fixture, even if the instantiation is written only once, is actually called again every time it is passed to a new test function. Is there a way to avoid this and instead create the class object only once, with all tests run against that one object?

Fixtures can have different scopes; in other words, a fixture can be called once for every function that uses it, once for every test module that uses it, or once for every test session that uses it. See https://docs.pytest.org/en/latest/fixture.html#scope-sharing-a-fixture-instance-across-tests-in-a-class-module-or-session
This sample script will fail if you use the default scope (function) and pass if you use another scope.
import pytest

SCOPE = "function"
# SCOPE = "session"

class Shared(object):
    counter = 0

    def __init__(self):
        # Each new instance gets the next id, so a second instantiation is detectable.
        self.instance_id = Shared.counter
        Shared.counter += 1

@pytest.fixture(scope=SCOPE)
def shared_instance():
    instance = Shared()
    yield instance

def test_one(shared_instance):
    assert shared_instance.instance_id == 0

def test_two(shared_instance):
    # Passes only if the fixture was not re-created for this test.
    assert shared_instance.instance_id == 0
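For the scenario in the question, where one instance should be shared by all test methods of a class, the same idea with scope="class" would look roughly like this (a minimal sketch; the Heavy class is just a stand-in for an expensive-to-build object):

import pytest

class Heavy:
    instances = 0

    def __init__(self):
        Heavy.instances += 1   # count how many times we get constructed

@pytest.fixture(scope="class")
def heavy_instance():
    return Heavy()             # built once per test class thanks to scope="class"

class TestHeavy:
    def test_one(self, heavy_instance):
        assert Heavy.instances == 1

    def test_two(self, heavy_instance):
        # Still 1: the same instance is reused for every test in this class.
        assert Heavy.instances == 1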

Related

How to use @patch.object decorator for a method that is being called in multiple places in a class?

I have an implementation class with a save method that is called in multiple places within the class.
Basically, that method takes an argument and returns a file URL, which is a string.
In the class I'm trying to test, I'm saving multiple files in different locations. How can I test that in my unit test class?
For example, I was able to mock the delete method, which is called only once, like below:
@patch.object(FileStoreSp, "delete_file", return_value=True)
But for the save method I'm not sure how to test it, since it is called in multiple places and returns different values. Is there a way I can supply the return values in the order in which the method will be called?
Any help would be appreciated.
You could monkey patch the save method: create a temp directory and test that everything is in place after your function has run (see the sketch at the end of this answer).
However, the scenario you describe indicates that you should probably refactor your code to be more testable. Writing files is a so-called "side effect". Side effects make your code harder (maybe impossible) to test. Try to avoid side effects if possible, and if they are really needed, try to concentrate them in one place at the boundary of your system. There are many strategies to achieve this. For example:
Rearrange function calls
Delegate the execution of the side effect. E.g. let the function return a value describing what should be done (return "write-file", "Filename") and handle those at the top level
If you really cannot change the code (maybe it's 3rd-party code out of your control), then you can monkey patch nearly everything in Python. How best to do it depends on your concrete scenario and code. For the unittest framework, have a look at MagicMock.
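A minimal, self-contained sketch of the monkey-patch-into-a-temp-directory idea described above; FileStore, process and fake_save are made-up stand-ins for the real class and method, not names from the question:

import tempfile
from pathlib import Path
from unittest.mock import patch

class FileStore:
    def save(self, name: str, data: str) -> str:
        raise RuntimeError("real I/O we do not want in tests")

def process(store: FileStore) -> list:
    # Stand-in for the code under test: saves multiple files in different places.
    return [store.save("a.txt", "first"), store.save("b.txt", "second")]

def test_process_saves_files():
    with tempfile.TemporaryDirectory() as tmp:
        def fake_save(self, name, data):
            path = Path(tmp) / name
            path.write_text(data)   # the side effect now lands in the temp directory
            return str(path)        # a fake "file url"

        with patch.object(FileStore, "save", fake_save):
            urls = process(FileStore())

        # Everything is in place after the function has run:
        assert [Path(u).name for u in urls] == ["a.txt", "b.txt"]
        assert (Path(tmp) / "a.txt").read_text() == "first"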
If I understand correctly, you have some method on your class and you want to test that method. And that method calls another method (save) more than once. Now you want to mock out the save method, while testing that other method, which is the correct approach.
Let's abstract this for a moment. Say the method you are testing is called bar and inside it calls the method foo twice. Now foo does all sorts of stuff including side effects (disk I/O, whatever), so you obviously want to mock it during the bar test. Yet you want to ensure that foo is called in the way you expect it from bar and also that bar does something specific with the return values it gets from foo.
Thankfully, the Mock class allows you to set the side_effect attribute in various ways. One of them is setting it to an iterable. Calling the mock once then returns the next element from that iterable. This allows you to set multiple distinct return values for the mocked object in advance.
We can then leverage the assert_has_calls method of the mocked object using call objects to verify that foo was called with the expected arguments.
Here is an example to illustrate the concept:
from unittest import TestCase
from unittest.mock import MagicMock, call, patch

class MyClass:
    def foo(self, string: str) -> list[str]:
        print("Some side effect")
        return string.split()

    def bar(self, string1: str, string2: str) -> tuple[str, str]:
        x = self.foo(string1)[0]
        y = self.foo(string2)[0]
        return x, y

class MyTestCase(TestCase):
    @patch.object(MyClass, "foo")
    def test_bar(self, mock_foo: MagicMock) -> None:
        # Have mocked `foo` return ["a"] first, then ["b"]
        mock_foo.side_effect = ["a"], ["b"]
        # Thus, we expect `bar` to return ("a", "b")
        expected_bar_output = "a", "b"
        obj = MyClass()
        # The arguments for `bar` are not important here,
        # they just need to be unique to ensure correct calls of `foo`:
        arg1, arg2 = MagicMock(), MagicMock()
        output = obj.bar(arg1, arg2)
        # Ensure the output is as expected:
        self.assertTupleEqual(expected_bar_output, output)
        # Ensure `foo` was called as expected:
        mock_foo.assert_has_calls([call(arg1), call(arg2)])
Hope this helps.

Is there a way to mock the return for type(), without replacing with isinstance()?

I am trying to unit test a block of code, and I'm running into issues with mocking the object's type to grab the right function from a dictionary.
For example:
my_func_dict = {
    Foo: foo_func,
    Bar: bar_func,
    FooBar: foobar_func,
}

def generic_type_func(my_obj):
    my_func = my_func_dict[type(my_obj)]
    my_func()
With this code, I can swap between functions with a key lookup, and it's pretty efficient.
When I try to mock my_obj like this, I get a KeyError:
mock_obj = Mock(spec=Foo)
generic_type_func(mock_obj)
# OUTPUT:
# KeyError: <class 'unittest.mock.Mock'>
This is because it's a mock type. However, when I check isinstance(), it returns True:
is_instance_foo = isinstance(mock_obj, Foo)
print(is_instance_foo)
# Output:
# True
Is there any way to keep the type() check and the dictionary lookup by key, while still being able to mock the input and its return type? Or is there a different pattern where I keep the performance of a dictionary but use isinstance() instead, so I can mock the parameter? Looping over a list to check the type against every possible value is not preferred.
I managed to unit test this by moving the function to the parameter itself, and implicitly calling the function from the parent. I wanted to avoid this, because now the function manipulates the parent implicitly instead of explicitly from the parent itself. It looks like this now:
def generic_type_func(self, my_obj):
    my_obj.my_func(self)
The function then modifies self as needed, but implicitly instead of an explicit function on the parent class.
This:
def my_func(self, parent):
    self.foo_prop = parent
Rather than:
def my_foo_func(self, foo):
    foo.foo_prop = self
This works fine with a mock, and I can mock that function easily. I've just hidden some of the functionality and now edit properties on the parent implicitly, instead of explicitly from within the class I'm working in. Maybe this is preferable anyway; it looks cleaner, with less code on the parent class. Every instance must have my_func this way, which is enforced via an abstract base class.
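Putting the fragments above together, a minimal sketch of the resulting pattern (the names Child, Parent and the test are illustrative, not the original code) could look like this:

from abc import ABC, abstractmethod
from unittest.mock import MagicMock

class Child(ABC):
    @abstractmethod
    def my_func(self, parent) -> None: ...

class Foo(Child):
    def my_func(self, parent) -> None:
        self.foo_prop = parent        # the child edits the parent "implicitly"

class Parent:
    def generic_type_func(self, my_obj: Child) -> None:
        my_obj.my_func(self)          # dispatch is now plain polymorphism

def test_generic_type_func():
    parent = Parent()
    mock_obj = MagicMock(spec=Foo)    # spec keeps isinstance() checks working
    parent.generic_type_func(mock_obj)
    mock_obj.my_func.assert_called_once_with(parent)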

Pass closure to FunctionType in function

I have code like this:
class A():
    def __init__(self, a):
        self.a = a

    def outer_method(self):
        def inner_method():
            return self.a + 1
        return inner_method()
I want to write a test for inner_method. For that, I am using code like this:
from types import CodeType, FunctionType

def find_nested_func(parent, child_name):
    """
    Return the function named <child_name> that is defined inside
    a <parent> function.
    Returns None if nonexistent.
    """
    consts = parent.__code__.co_consts
    item = list(filter(lambda x: isinstance(x, CodeType) and x.co_name == child_name, consts))[0]
    return FunctionType(item, globals())
Calling it with find_nested_func(A().outer_method, 'inner_method') fails inside the FunctionType call, because the function cannot be created: self.a no longer exists once the function is pulled out of its enclosing scope. I know the FunctionType constructor can receive a closure argument that could fix this problem, but I don't know how to use it. How can I pass it?
The error it gives is the following:
return FunctionType(item, globals())
TypeError: arg 5 (closure) must be tuple
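For reference, the closure argument has to be a tuple of cell objects, one per name in the inner code object's co_freevars; on Python 3.8+ such cells can be built with types.CellType. A hedged sketch of how that could look (note that the answer below argues you usually should not do this at all):

from types import CellType, CodeType, FunctionType

class A():
    def __init__(self, a):
        self.a = a

    def outer_method(self):
        def inner_method():
            return self.a + 1
        return inner_method()

def find_nested_func(parent, child_name, closure_values=()):
    consts = parent.__code__.co_consts
    code = next(c for c in consts
                if isinstance(c, CodeType) and c.co_name == child_name)
    # One cell per free variable of the inner code object (here: just `self`).
    closure = tuple(CellType(value) for value in closure_values)
    return FunctionType(code, globals(), None, None, closure)

instance = A(1)
inner = find_nested_func(A.outer_method, 'inner_method', closure_values=(instance,))
assert inner() == 2   # `self` inside inner_method is now the cell we supplied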
Why are you trying to test inner_method? In most cases, you should only test parts of your public API. outer_method is part of A's public API, so test just that. inner_method is an implementation detail that can change: what if you decide to rename it? what if you refactor it slightly without modifying the externally visible behavior of outer_method? Users of the class A have no (easy) way of calling inner_method. Unit tests are usually only meant to test things that users of your class can call (I'm assuming these are for unit tests, because integration tests this granular would be strange--and the same principle would still mostly hold).
Practically, you'll have a problem extracting functions defined within another function's scope, for several reasons, including variable capture. You have no way of knowing whether inner_method only captures self or whether outer_method performs some logic and computes some variables that inner_method uses. For example:
class A:
    def outer_method(self):
        b = 1

        def inner_method():
            return self.a + b
        return inner_method()
Additionally, you could have control statements around the function definition, so there is no way to decide which definition is used without running outer_method. For example:
import random

class A:
    def outer_method(self):
        if random.random() < 0.5:
            def inner_method():
                return self.a + 1
        else:
            def inner_method():
                return self.a + 2
        return inner_method()
You can't extract inner_method here because there are two of them and you don't know which is actually used until you run outer_method.
So, just don't test inner_method.
If inner_method is truly complex enough that you want to test it in isolation (and if you do so, principled testing says you should mock out its uses, e.g. its use in outer_method), then just make it a "private-ish" method on A:
class A:
    def _inner_method(self):
        return self.a + 1

    def outer_method(self):
        return self._inner_method()
Principled testing says you really shouldn't be testing underscore methods, but sometimes necessity requires it. Doing things this way allows you to test _inner_method just as you would any other method. Then, when testing outer_method, you could mock it out by doing a._inner_method = Mock() (where a is the A object under test).
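A short sketch of how that could look with unittest (the class below just repeats the refactored A from above, plus the __init__ from the question):

from unittest import TestCase
from unittest.mock import MagicMock

class A:
    def __init__(self, a):
        self.a = a

    def _inner_method(self):
        return self.a + 1

    def outer_method(self):
        return self._inner_method()

class ATest(TestCase):
    def test_inner_method(self):
        self.assertEqual(A(1)._inner_method(), 2)

    def test_outer_method_with_inner_mocked(self):
        a = A(1)
        a._inner_method = MagicMock(return_value=42)   # mock it out, as described above
        self.assertEqual(a.outer_method(), 42)
        a._inner_method.assert_called_once_with()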
Also, just write class A: the parentheses after the class name are unnecessary unless you are specifying base classes.

Calling functions / class methods inside a for loop

I'm working on some classes, and for the testing process it would be very useful to be able to run the class methods in a for loop. I'm adding methods and changing their names, and I want these changes to be picked up automatically in the file where I run the class for testing.
I use the function below to get a list of the methods I need to run automatically (there are some other conditional statements I deleted for the example, to make sure that I only run certain methods that require testing and which take only self as an argument).
def get_class_methods(class_to_get_methods_from):
    import inspect
    methods = []
    for name, type in (inspect.getmembers(class_to_get_methods_from)):
        if 'method' in str(type) and str(name).startswith('_') == False:
            methods.append(name)
    return methods
Is it possible to use the returned list 'methods' to run the class methods in a for loop?
Or is there any other way to make sure I can run my class methods from my test-running file without having to re-apply every change I make to the class?
Thanks!
Looks like you want getattr(object, name[, default]):
class Foo(object):
    def bar(self):
        print("bar({})".format(self))

f = Foo()
method = getattr(f, "bar")
method()
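Combining getattr with the helper from the question, a minimal sketch of the test-driver loop could look like this (Thing and its check_* methods are made-up stand-ins, and the helper is simplified):

import inspect

def get_class_methods(cls):
    # Simplified version of the helper from the question.
    return [name for name, member in inspect.getmembers(cls, inspect.isfunction)
            if not name.startswith('_')]

class Thing:
    def check_a(self):
        print("check_a ran")

    def check_b(self):
        print("check_b ran")

obj = Thing()
for name in get_class_methods(Thing):
    getattr(obj, name)()   # look up each bound method by name and call it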
As a side note: I'm not sure that dynamically generating lists of methods to test is such a good idea (it looks rather like an antipattern to me), but it's hard to tell without the whole project's context, so take this remark with the required grain of salt ;)

Python: calling stop on mock patch class decorator

The Mock documentation describes a simple and elegant way of applying patches to all of the test methods inside a TestCase:
@patch('foo.bar')
@patch('foo.baz')
@patch('foo.quux')
@patch('foo.narf')
class FooTest(TestCase):
    def test_foo(self, bar, baz, quux, narf):
        """ foo """
        self.assertTrue(False)
However, one issue I've encountered with this approach is that if I'd like to call stop() on one of the patches inside one of the test methods, there doesn't appear to be any way of getting a reference to the patcher object; the only things passed into the method are the mock objects, in this case bar, baz, quux and narf.
The only way I've found to solve this problem is to move to the pattern described in the Mock docs where the patchers are instantiated and started inside the setUp method of the TestCase and stopped inside the tearDown method. This fits my purpose, but adds a lot of extra boilerplate and isn't as elegant as the class decorator approach.
Is there another way to solve this problem?
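For reference, a hedged sketch of the setUp/tearDown pattern mentioned above, using os.getcwd as a stand-in patch target so the snippet is self-contained:

import os
from unittest import TestCase
from unittest.mock import patch

class FooTest(TestCase):
    def setUp(self):
        # Keep a reference to the patcher itself, not just the mock it produces.
        self.getcwd_patcher = patch('os.getcwd', return_value='/fake')
        self.mock_getcwd = self.getcwd_patcher.start()

    def tearDown(self):
        self.getcwd_patcher.stop()

    def test_foo(self):
        self.assertEqual(os.getcwd(), '/fake')
        self.getcwd_patcher.stop()                      # temporarily restore the real function
        self.assertNotEqual(os.getcwd(), '/fake')
        self.mock_getcwd = self.getcwd_patcher.start()  # re-start so tearDown's stop() still matches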
Option 1
Say you want to temporarily restore foo.narf in a method. foo.narf is, in the context of the decorated function, a MagicMock object. This object has a _mock_wraps attribute which will be invoked when the mock is called! So at the top of your module, _narf = foo.narf, and in your test case, foo.narf._mock_wraps = _narf.
The catch is that this will only pass through to the real function, not actually swap it back, which means that some test cases will fail (e.g. if they rely on the function object actually being "itself"). And if your mock has other attributes, that could interfere (I haven't tested much) because the passthrough call to _mock_wraps() comes at the bottom of a method that first considers the other properties of the mock.
Option 2
When patch() is used as a decorator, each patcher (a separate copy per method) is added to a list called patchings, which is an attribute of the method itself. That is, you can access this list as self.test_foo.patchings and go through it to find the one you want.
However, start() and stop() are not actually called when you use patch() as a decorator, and the behavior gets tricky once you start reaching in and changing it. So I wrote this context manager.
from unittest.mock import patch

class unpatch:
    def __init__(self, name, method):
        compare = patch(name)
        self.patcher = next((
            p for p in method.patchings
            if p.target == compare.getter()
            and p.attribute == compare.attribute
        ), None)
        if self.patcher is None:
            raise ValueError(name)

    def __enter__(self):
        self.patcher.__exit__()

    def __exit__(self, *exc_info):
        self.patcher.__enter__()
Inside your test case, you use it like this:
with unpatch('foo.narf', self.test_foo):
    foo.narf()
Disclaimer: this is hacks.
