I'm working on some classes, and for testing it would be very useful to be able to run the class methods in a for loop. I'm adding methods and changing their names, and I want those changes to be picked up automatically in the file where I run the class for testing.
I use the function below to get a list of the methods I need to run automatically (I deleted some other conditional statements from the example; they make sure that I only run certain methods that require testing and that take only self as an argument):
def get_class_methods(class_to_get_methods_from):
    import inspect
    methods = []
    # Collect the names of all public methods (name does not start with "_"):
    for name, member in inspect.getmembers(class_to_get_methods_from):
        if 'method' in str(member) and not name.startswith('_'):
            methods.append(name)
    return methods
Is it possible to use the returned list 'methods' to run the class methods in a for loop?
Or is there any other way to make sure I can run my class methods from my test-running file without having to mirror every change I make in the class?
Thanks!
Looks like you want getattr(object, name[, default]):
class Foo(object):
    def bar(self):
        print("bar({})".format(self))

f = Foo()
method = getattr(f, "bar")
method()
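Combined with the get_class_methods function from the question, the test loop can then look like this (a minimal sketch; MyClass is a placeholder for whatever class you are testing):

class MyClass:
    def method_a(self):
        print("method_a")

    def method_b(self):
        print("method_b")

# Pass an instance so the members show up as bound methods:
instance = MyClass()
for name in get_class_methods(instance):
    method = getattr(instance, name)
    method()  # the collected methods take only `self`, so no arguments are needed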
As a side note: I'm not sure that dynamically generating lists of methods to test is such a good idea (it looks rather like an antipattern to me) - it's hard to tell without the whole project's context, so take these remarks with the required grain of salt ;)
I have an implementation class with a save method that is called in multiple places within the class.
That method takes an argument and returns a file URL, which is a string.
In the class I'm trying to test, I'm saving multiple files in different locations. How can I test that in my unit test class?
For example, I was able to mock the delete method like below, which is called only once:
@patch.object(FileStoreSp, "delete_file", return_value=True)
But for the save method I'm not sure how I can test it, since it's called in multiple places and returns different values. Is there a way I can pass the return values in some sort of order matching the order in which the method is called?
Any help would be appreciated.
You could monkey patch the save method. You could create a temp directory and test that everything is in place after your function has run.
However, the scenario you describe indicates that you should probably refactor your code to be more testable. Writing files is a so-called "side effect". Side effects make your code harder (maybe even impossible) to test. Try to avoid side effects if possible, and if they are really needed, try to concentrate them in one place at the boundary of your system. There are many strategies to achieve this. For example:
Rearrange function calls
Delegate the execution of the side effect. E.g. let the function return a value describing what should be done (return "write-file", "Filename") and handle those at the top level (see the sketch below)
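A minimal sketch of that delegation idea (save_report, the action tuple, and the file name are all illustrative, not from your code):

# Pure function: it decides *what* should happen, but performs no I/O itself.
def save_report(data):
    content = ",".join(data)
    return ("write-file", "report.csv", content)

# Top level: the only place where the side effect actually happens.
def main():
    action, path, content = save_report(["a", "b", "c"])
    if action == "write-file":
        with open(path, "w") as f:
            f.write(content)

# Tests never touch the disk; they only assert on the returned description:
assert save_report(["a", "b"]) == ("write-file", "report.csv", "a,b")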
If you really cannot change the code (maybe it's 3rd-party code out of your control), then you can monkey patch nearly everything in Python. How best to do it depends on your concrete scenario and code. For the unittest framework, have a look at MagicMock.
If I understand correctly, you have some method on your class and you want to test that method. And that method calls another method (save) more than once. Now you want to mock out the save method, while testing that other method, which is the correct approach.
Let's abstract this for a moment. Say the method you are testing is called bar and inside it calls the method foo twice. Now foo does all sorts of stuff including side effects (disk I/O, whatever), so you obviously want to mock it during the bar test. Yet you want to ensure that foo is called in the way you expect it from bar and also that bar does something specific with the return values it gets from foo.
Thankfully, the Mock class allows you to set the side_effect attribute in various ways. One of them is setting it to an iterable. Calling the mock once then returns the next element from that iterable. This allows you to set multiple distinct return values for the mocked object in advance.
We can then leverage the assert_has_calls method of the mocked object using call objects to verify that foo was called with the expected arguments.
Here is an example to illustrate the concept:
from unittest import TestCase
from unittest.mock import MagicMock, call, patch


class MyClass:
    def foo(self, string: str) -> list[str]:
        print("Some side effect")
        return string.split()

    def bar(self, string1: str, string2: str) -> tuple[str, str]:
        x = self.foo(string1)[0]
        y = self.foo(string2)[0]
        return x, y


class MyTestCase(TestCase):
    @patch.object(MyClass, "foo")
    def test_bar(self, mock_foo: MagicMock) -> None:
        # Have mocked `foo` return ["a"] first, then ["b"]
        mock_foo.side_effect = ["a"], ["b"]
        # Thus, we expect `bar` to return ("a", "b")
        expected_bar_output = "a", "b"
        obj = MyClass()
        # The arguments for `bar` are not important here,
        # they just need to be unique to ensure correct calls of `foo`:
        arg1, arg2 = MagicMock(), MagicMock()
        output = obj.bar(arg1, arg2)
        # Ensure the output is as expected:
        self.assertTupleEqual(expected_bar_output, output)
        # Ensure `foo` was called as expected:
        mock_foo.assert_has_calls([call(arg1), call(arg2)])
Hope this helps.
I am trying to unit test a block of code, and I'm running into issues with mocking the object's type to grab the right function from a dictionary.
For example:
my_func_dict = {
    Foo: foo_func,
    Bar: bar_func,
    FooBar: foobar_func,
}

def generic_type_func(my_obj):
    my_func = my_func_dict[type(my_obj)]
    my_func()
With this code, I can swap between functions with a key lookup, and it's pretty efficient.
When I try to mock my_obj like this, I get a KeyError:
mock_obj = Mock(spec=Foo)
generic_type_func(mock_obj)
# OUTPUT:
# KeyError: <class 'unittest.mock.Mock'>
Because it's a mock type. Although, when I check isinstance(), it returns True:
is_instance_foo = isinstance(mock_obj, Foo)
print(is_instance_foo)
# Output:
# True
Is there any way to retain the type() check and the dictionary lookup via a key, while still keeping the ability to mock the input and its return value? Or perhaps a different pattern where I retain the performance of a dictionary but use isinstance() instead, so I can mock the parameter? Looping over a list to check the type against every possible value is not preferred.
I managed to unit test this by moving the function to the parameter itself and calling it implicitly from the parent. I wanted to avoid this, because now the function manipulates the parent implicitly instead of the parent doing so explicitly itself. It looks like this now:
def generic_type_func(self, my_obj):
    my_obj.my_func(self)
The function then modifies self as needed, but implicitly, instead of through an explicit function on the parent class.
This:
def my_func(self, parent):
    self.foo_prop = parent
Rather than:
def my_foo_func(self, foo):
    foo.foo_prop = self
This works fine with a mock, and I can mock that function easily. I've just hidden some of the functionality, and I edit properties on the parent implicitly instead of explicitly from within the class I'm working in. Maybe this is preferable anyway, and it looks cleaner with less code in the parent class. Every instance must have my_func this way, which is enforced via an abstract base class.
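Put together, the pattern looks roughly like this (a sketch with illustrative names; Handler is the hypothetical abstract base class mentioned above):

from abc import ABC, abstractmethod
from unittest.mock import Mock

class Handler(ABC):
    # Every concrete type must provide my_func, enforced by the ABC.
    @abstractmethod
    def my_func(self, parent):
        ...

class Foo(Handler):
    def my_func(self, parent):
        self.foo_prop = parent  # modifies itself with a reference to the parent

class Parent:
    def generic_type_func(self, my_obj):
        my_obj.my_func(self)  # dispatch via the parameter, no type() lookup

# In a test, the parameter mocks cleanly, with no real type lookup needed:
parent = Parent()
mock_obj = Mock(spec=Foo)
parent.generic_type_func(mock_obj)
mock_obj.my_func.assert_called_once_with(parent)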
Here's a simple class created declaratively:
class Person:
    def say_hello(self):
        print("hello")
And here's a similar class, but it was defined by invoking the metaclass manually:
def say_hello(self):
    print("sayolala")

say_hello.__qualname__ = 'Person.say_hello'

TalentedPerson = type('Person', (), {'say_hello': say_hello})
I'm interested to know whether they are indistinguishable. Is it possible to detect such a difference from the class object itself?
>>> def was_defined_declaratively(cls):
... # dragons
...
>>> was_defined_declaratively(Person)
True
>>> was_defined_declaratively(TalentedPerson)
False
This should not matter at all. Even if we dig for more attributes that differ, it should be possible to inject these attributes into the dynamically created class.
Now, even without the source file around (from which things like inspect.getsource can make their way - but see below), class body statements have a corresponding code object that is run at some point. The dynamically created class won't have a code body (although if, instead of calling type(...), you call types.new_class, you can have a custom code object for the dynamic class as well). So, as for my first statement: it should be possible to render both classes indistinguishable.
As for locating the code object without relying on the source file: other than via inspect.getsource, it can be reached through a method's .__code__ attribute, which records co_filename and co_firstlineno. (I suppose one would have to parse the file and locate the class statement above co_firstlineno then.)
And yes, there it is:
Given a module, you can use module.__loader__.get_code('full.path.to.module') - this will return a code object. That object has a co_consts attribute, which is a sequence of all the constants compiled in that module - among them are the code objects for the class bodies themselves. And these have the line number, and the code objects for the nested declared methods, as well.
So, a naive implementation could be:
import sys, types

def was_defined_declarative(cls):
    module_name = cls.__module__
    module = sys.modules[module_name]
    module_code = module.__loader__.get_code(module_name)
    return any(
        code_obj.co_name == cls.__name__
        for code_obj in module_code.co_consts
        if isinstance(code_obj, types.CodeType)
    )
This works for simple cases. If the class body can be inside another function, or nested inside another class body, you have to do a recursive search through the .co_consts attribute of every code object in the file. The same applies if you find it safer to check attributes beyond cls.__name__ to assert you got the right class.
And again, while this will work for "well behaved" classes, it is possible to dynamically create all these attributes if needed - but that would ultimately require one to replace the code object for a module in sys.modules - it starts to get a little more cumbersome than simply providing a __qualname__ to the methods.
Update
This version compares all strings defined inside all the methods on the candidate class. It will work with the given example classes - more accuracy can be achieved by comparing other class members, such as class attributes, and other method attributes, such as variable names and possibly even bytecode. (For some reason, the code objects for methods in the module's code object and in the class body are different instances, though code objects should be immutable.)
I will leave the implementation above, which only compares the class names, as it should be better for understanding what is going on.
def was_defined_declarative(cls):
    module_name = cls.__module__
    module = sys.modules[module_name]
    module_code = module.__loader__.get_code(module_name)
    cls_methods = set(
        obj for obj in cls.__dict__.values()
        if isinstance(obj, types.FunctionType)
    )
    cls_meth_strings = [
        string
        for method in cls_methods
        for string in method.__code__.co_consts
        if isinstance(string, str)
    ]
    for candidate_code_obj in module_code.co_consts:
        if not isinstance(candidate_code_obj, types.CodeType):
            continue
        if candidate_code_obj.co_name != cls.__name__:
            continue
        candidate_meth_strings = [
            string
            for method_code in candidate_code_obj.co_consts
            if isinstance(method_code, types.CodeType)
            for string in method_code.co_consts
            if isinstance(string, str)
        ]
        if candidate_meth_strings == cls_meth_strings:
            return True
    return False
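A quick usage sketch, assuming Person and TalentedPerson from the question are defined at the top level of an importable module (this will not work in a plain REPL session, where there is no module code for the loader to return):

print(was_defined_declarative(Person))          # expected: True
print(was_defined_declarative(TalentedPerson))  # expected: False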
It is not possible to detect such a difference at runtime in Python.
You could check the files with a third-party tool, but not from within the language: however you define your classes, they are reduced to objects that the interpreter knows how to manage.
Everything else is syntactic sugar, and it is gone by the time the source text has been processed.
Metaprogramming as a whole is a technique that brings you close to the compiler/interpreter's work, revealing some of the type's traits and giving you the freedom to operate on types with code.
It is possible — somewhat.
inspect.getsource(TalentedPerson) will fail with an OSError, whereas it will succeed with Person. This only works though if you don't have a class of that name in the file where it was defined:
If your file consists of both of these definitions, and TalentedPerson also believes it is Person, then inspect.getsource will simply find Person's definition.
Obviously this relies on the source code still being around and findable by inspect - it won't work with compiled code, e.g. in the REPL; it can be tricked, and it is sort of cheating. The actual code objects don't differ, AFAIK.
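A minimal sketch of that check, assuming the dynamic class is created in a module that contains no class statement with the same name:

import inspect

TalentedPerson = type('TalentedPerson', (), {})

try:
    inspect.getsource(TalentedPerson)
except OSError:
    # No class statement matching the name was found in the source file,
    # so the class was most likely created dynamically.
    print("no source found")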
I'm trying to override the __dir__ method in a Python class. Inside it, I want to call the builtin dir() function, but my attempts at calling it seem to invoke my method rather than the default implementation.
The correct way to override the __dir__ method in python
That question seemed relevant, but it doesn't answer mine, since their issue was with extending a numpy class, not with calling the default dir implementation for the class.
Relevant code
def __dir__(self):
    return builtins.dir(self) + self.fields()
In other words I want everything that would normally be listed plus some other things, but the builtins.dir() call just calls this function recursively.
I'm testing this in Python 3.6
Since overriding __dir__ only matters for a class's instances, you can do this:
class Test:
    def test(self):
        print('hey')

    def __dir__(self):
        return dir(Test) + ['hello']
Notice that dir(Test) is different from dir(Test()) because only the latter calls Test.__dir__.
Using dir(super()) inside Test.__dir__ also kinda works, but it only provides you with data for the parent class, so that dir(Test()) won't contain the names of the attributes present exclusively in the class Test.
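A related approach, not covered above, is calling super().__dir__() rather than dir(super()): it resolves to object.__dir__(self) (available since Python 3.3), so it sees instance attributes too and does not recurse into the override. A sketch, where fields() is the hypothetical extra-names method from the question:

class Test:
    def fields(self):
        return ['hello']

    def __dir__(self):
        # super().__dir__() is object.__dir__(self): the default listing,
        # including instance attributes, without recursing into this override.
        return sorted(set(super().__dir__()) | set(self.fields()))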
I want to add two variables to my subclass, which inherits from unittest.TestCase, like I have here:
import unittest

class mrp_repair_test_case(unittest.TestCase):
    def __init__(self, a=None, b=None, methodName=['runTest']):
        unittest.TestCase.__init__(self)
        self.a = a
        self.b = b

    def test1(self):
        ..........
        .......

def runtest():
    mrp_repair_test_case(a=10, b=20)
    suite = unittest.TestLoader().loadTestsFromTestCase(mrp_repair_test_case)
    res = unittest.TextTestRunner(stream=out, verbosity=2).run(suite)
How can I achieve this?
I am getting this error:
ValueError: no such test method in <class 'mrp_repair.unit_test.test.mrp_repair_test_case'>: runTest
Thanks
At first glance, it looks like you need to create an instance of mrp_repair_test_case. Your current line:
mrp_repair_test_case(a=10,b=20)
doesn't actually do anything.
Try (not tested):
def runtest():
    m = mrp_repair_test_case(a=10, b=20)
    suite = unittest.TestLoader().loadTestsFromTestCase(m)
    res = unittest.TextTestRunner(stream=out, verbosity=2).run(suite)
This assumes you've set up 'out' as a stream already.
Edit:
By the way, is there any reason you're not using a setUp method to set these values? That would be normal best practice. Looking at the documentation of loadTestsFromTestCase, it looks like it only accepts the class itself, not an instance of it, which means you're rather working against the design of the unittest module.
Edit 2:
In response to your further information, I would actually set your uid and cursor values seperately at module level before calling the tests. I'm not a huge fan of globals normally, but if I'm understanding you correctly these values will be A) read-only B) always the same for the same customer which avoids most of the normal pitfalls in using them.
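A minimal sketch of that module-level approach (UID and CURSOR are illustrative stand-ins for your customer-specific values, not names from your code):

import sys
import unittest

# Set once per customer before the tests run; effectively read-only afterwards.
UID = None
CURSOR = None

class MrpRepairTestCase(unittest.TestCase):
    def setUp(self):
        # Each test instance picks up the shared values here:
        self.uid = UID
        self.cursor = CURSOR

    def test1(self):
        self.assertIsNotNone(self.uid)

def runtest(uid, cursor, out=sys.stderr):
    global UID, CURSOR
    UID, CURSOR = uid, cursor
    suite = unittest.TestLoader().loadTestsFromTestCase(MrpRepairTestCase)
    unittest.TextTestRunner(stream=out, verbosity=2).run(suite)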
Edit 3:
To answer your edit: if you really want to use __init__, you probably can, but you will have to roll your own loadTestsFromTestCase alternative, and possibly your own TestSuite (you'll have to check the internals of how it works). As I said above, you'll be working against the existing design of the module - to the extent that, if you decide to do your testing that way, it might be easier to roll your own solution completely from scratch than to use unittest. Amend: just checked, you'd definitely have to roll your own version of TestSuite, as the existing one creates a new instance of the TestCase class for each test.