I'm unit testing a module I wrote and running into a problem with a default argument whose value is an instance of a class that I am mocking.
This is how it looks in high level:
main_file.py
class MainClass(object):
    def main_func(self):
        sub_class_obj = SubClass()
        sub_class_obj.sub_func()
sub_file.py
class SubClass(object):
    def sub_func(self, my_att=Helper(2)):
        self.my_att = my_att
helpers.py
class Helper():
    def __init__(self, my_val):
        self.my_val = my_val
test.py
class TestClass(object):
    @patch('sub_file.Helper', MockHelper)
    def my_test(self):
        main_class_obj = MainClass()
        main_class_obj.main_func()
When I call sub_func with my_att provided explicitly, everything works and the mock is used, but when I rely on the default value, I get an instance of the original Helper class.
Any idea how to make the default value receive the mock as well?
Thanks in advance!
The problem is that the default value is evaluated at import time, when the function is defined, so it is already stored in the function object before you patch Helper.
You can, however, also patch the default arguments of your function (which can be accessed via __defaults__):
from sub_file import SubClass

class TestClass(object):
    @patch('sub_file.Helper', MockHelper)
    @patch.object(SubClass.sub_func, "__defaults__", (MockHelper(),))
    def my_test(self):
        main_class_obj = MainClass()
        main_class_obj.main_func()
Note that __defaults__ has to be a tuple of arguments.
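To see what is going on, here is a minimal standalone sketch (independent of the question's classes) showing that the defaults live on the function object and can be rebound:

def greet(name="world"):
    return "hello " + name

print(greet.__defaults__)        # ('world',)
greet.__defaults__ = ("there",)  # rebind the default; must be a tuple
print(greet())                   # hello there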
You could also use monkeypatch to do the same:
from sub_file import SubClass

class TestClass(object):
    @patch('sub_file.Helper', MockHelper)
    def my_test(self, monkeypatch):
        monkeypatch.setattr(SubClass.sub_func, "__defaults__", (MockHelper(),))
        main_class_obj = MainClass()
        main_class_obj.main_func()
UPDATE:
I didn't realize that this would not work with Python 2. Apart from the different name of the defaults attribute (func_defaults instead of __defaults__), this approach only works with standalone functions, not with methods, because setattr is not supported on methods in Python 2. Here is a workaround for Python 2:
from sub_file import SubClass

class TestClass(object):
    @patch('sub_file.Helper', MockHelper)
    def my_test(self):
        orig_sub_func = SubClass.sub_func
        with patch.object(SubClass, "sub_func",
                          lambda o, attr=MockHelper(): orig_sub_func(o, attr)):
            main_class_obj = MainClass()
            main_class_obj.main_func()
This way, the original sub_func is replaced by a function that has its own default value, but otherwise delegates the functionality to the original function.
UPDATE 2:
Just saw the answer by @chepner, and it is correct: the best way would be to refactor your code accordingly. Only if you cannot do that should you try this answer.
Default values are created when the function is defined, not when it is called. It's too late to patch Helper in your test, because SubClass.sub_func has already been defined by then.
Rather than patching anything, though, re-write MainClass so that there is no hard-coded reference to SubClass: then you can create the proper instance yourself without relying on a default value.
You can pass an instance directly:
class MainClass(object):
    def main_func(self, sub_class_obj):
        sub_class_obj.sub_func()

class TestClass(object):
    def my_test(self):
        mock_obj = MockHelper()
        main_class_obj = MainClass()
        main_class_obj.main_func(SubClass(mock_obj))
or take a factory function that will be called to create the subclass object.
class MainClass(object):
    def main_func(self, factory=SubClass):
        sub_class_obj = factory()
        sub_class_obj.sub_func()

class TestClass(object):
    def my_test(self):
        main_class_obj = MainClass()
        main_class_obj.main_func(lambda: SubClass(MockHelper()))
I'm creating a script that's actually an environment for other users to write code on.
I've declared several methods in a class and instantiate an object so users can use those methods as simple interpreter functions, like so:
from code import interact

class framework:
    def method1(self, arg1):
        pass  # code for method1
    def method2(self, arg2):
        pass  # code for method2

def main():
    fw = framework()
    # Aliases
    method1 = fw.method1
    method2 = fw.method2
    interact(local=locals())
Since I don't want users to call the methods with fw.method1(arg), I set up aliases. The problem is that, since the framework class is under development, I have to keep updating the main script with new aliases for the methods I create.
Is there a simple way to get rid of the "fw." prefix in the calls and have all the methods of class framework automatically visible inside main?
You control the dictionary passed to interact, so put what you want in it without worrying about local variables in main:
d = {}
for n in vars(framework):
    if n.startswith('_'): continue  # probably
    x = getattr(fw, n)
    if callable(x): d[n] = x
interact(local=d)
This works for regular, static, and class methods. It doesn’t see instance attributes (unless they’re shadowing a class attribute) or inherited methods.
The former matters only if the instance stores functions as attributes (in which case you might or might not want them available). The latter is convenient in that it omits object; if there are other base classes, they can easily be included by searching framework.__mro__ (and still omitting object).
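If you do want inherited methods as well, a rough sketch (assuming the same framework class and fw instance as above) could walk the MRO and skip object:

d = {}
for klass in type(fw).__mro__:
    if klass is object:
        continue  # omit object's methods
    for n in vars(klass):
        if n.startswith('_'):
            continue
        x = getattr(fw, n)
        if callable(x) and n not in d:
            d[n] = x  # nearest class in the MRO wins
interact(local=d)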
I don't know what you plan to do with the functions, but if they are not static, this will likely not work outside of the class:
fs = [func for func in dir(c) if callable(getattr(c, func)) and not func.startswith("__")]
for func in fs:
    globals()[func] = getattr(c, func)
This will put all custom functions of the class c into the global scope.
You basically want to do two things:
get a list of methods of a class instance
dynamically add functions to your local scope
The former can be achieved with the inspect module from the standard library, the latter by using vars.
Try the following:
import inspect
from code import interact

class framework:
    def method1(self, arg1):
        pass  # code for method1
    def method2(self, arg2):
        pass  # code for method2

def main():
    fw = framework()
    # get the methods of fw and update vars().
    # getmembers returns a list of two-tuples with (<Name>, <function>)
    methods = inspect.getmembers(fw, predicate=inspect.ismethod)
    vars().update({method[0]: method[1] for method in methods})
    interact(local=locals())
Here's something similar to @Davis Herring's answer, fleshed out and repackaged:
#from code import interact
def interact(local):
    local['method1'](42)
    local['method2']('foobar')

class Framework:
    def method1(self, arg1):
        print('Framework.method1({!r}) called'.format(arg1))
    def method2(self, arg2):
        print('Framework.method2({!r}) called'.format(arg2))

def get_callables(obj):
    return {name: getattr(obj, name) for name in vars(obj.__class__)
                if callable(getattr(obj, name))}

def main():
    fw = Framework()
    # # Aliases
    # method1 = fw.method1
    # method2 = fw.method2
    interact(local=get_callables(fw))

if __name__ == '__main__':
    main()
Output:
Framework.method1(42) called
Framework.method2('foobar') called
This would be the layout
some_function.py
def some_function():
    print("some_function")

some_library.py
from some_function import some_function

class A:
    def xxx(self):
        some_function()

main.py
from some_library import A
from some_function import some_function

def new_some_function():
    print("new_some_function")

if __name__ == '__main__':
    some_function = new_some_function
    a = A()
    a.xxx()
In class A, the method xxx calls some_function. Is it possible to override it with something else, without re-implementing the entire class?
I think you are looking for monkey patching (changing classes/modules dynamically at runtime). This way you don't need to overwrite class A and use inheritance as suggested in other comments - you said you don't want that, so try this solution:
import some_library  # import the module itself, or this will not work (because of namespaces)

def new_some_function():
    print("new_some_function")

if __name__ == '__main__':
    # Import and use it like this; other ways of importing will not work,
    # because "from ... import" only binds the function in your own namespace,
    # and changing it there has no effect - you need to change it in the
    # namespace where A looks it up.
    some_library.some_function = new_some_function
That way you replace the original function, and even other classes will then use the new one. Be careful: if the original is an instance or class method, you need to create a new function with the proper parameters, like this:
def new_some_function(self):
    # for instance methods, you may add other args, but 'self' is important
    ...

def new_some_function(cls):
    # for class methods, you may add other args, but 'cls' is important
    ...
You provide very little information about your use case here. As one of the comments points out, this might be a case for inheritance. If you are in a testing context, though, you may not want to use inheritance, but rather a mock object.
Here is the inheritance version:
from some_library import A

def new_some_function():
    print("new_some_function")

class B(A):
    def xxx(self):
        new_some_function()

if __name__ == '__main__':
    a = B()
    a.xxx()
Note how class B derives from class A through the class B(A) statement. This way, class B inherits all functionality from A, and the definition of class B only consists of the parts where B differs from A. In your example, that is the fact that the xxx method should call new_some_function instead of some_function.
Here is the mock version:
from unittest import mock
from some_library import A

def new_some_function():
    print("new_some_function")

if __name__ == '__main__':
    with mock.patch('some_library.some_function') as mock_some_function:
        mock_some_function.side_effect = new_some_function
        a = A()
        a.xxx()
As mentioned above this approach is mostly useful if you are in a testing context and if some_function does something costly and/or unpredictable. In order to test code that involves a call to some_function, you may temporarily want to replace some_function by something else, that is cheap to call and behaves in a predictable way. In fact, for this scenario, replacing some_function by new_some_function might even be more than what is actually needed. Maybe, you just want an empty hull that can be called and that always returns the same value (instead of the side_effect line, you can specify a constant .return_value in the above code example). One of the key functionalities of mock objects is that you can later check if that function has been called. If testing is your use case, I would very much recommend looking at the documentation of the python mock module.
Note that the example uses the mock.patch context manager. This means that within the managed context (i.e. the block inside the with-statement) some_library.some_function is replaced by a mock object, but once you leave the managed context, the original functionality is put back in place.
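For illustration, here is a hedged sketch of the return_value variant mentioned above, again assuming the some_library layout from the question; the mocked function simply returns None, and we can later assert that it was called:

from unittest import mock
from some_library import A

if __name__ == '__main__':
    with mock.patch('some_library.some_function', return_value=None) as mock_some_function:
        a = A()
        a.xxx()
    # after leaving the context, the original some_function is restored,
    # but the mock still remembers how it was used
    mock_some_function.assert_called_once()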
You may just create another class and override the method you need.
Just as an example:
class myInt(int):
    def __pow__(self, x):
        return 0

a = myInt(10)
a + 10  # 20
a ** 2  # 0
In this case a is an int and has access to all the methods of the int class, but it will use the __pow__ method I've defined.
What you need is inheritance: you can subclass a class, and through super() the child inherits all of the parent class's functions. If you want to override a parent class function, you just provide a different implementation under the same name in the child class.
from some_library import A
from some_function import some_function

def new_some_function():
    print("new_some_function")

class B(A):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def xxx(self):
        new_some_function()

if __name__ == '__main__':
    a = B()
    a.xxx()
Output:
new_some_function
Your syntax may differ depending on the Python version.
In Python 3:
class B(A):
    def __init__(self):
        super().__init__()
In Python 2:
class B(A):
    def __init__(self):
        super(B, self).__init__()
I want to test a class created with default parameters by replacing the value of the default parameter during unit testing.
For example, I want the following line (throughout the code)
obj = SomeClass()
To look like it was called with a parameter
obj = SomeClass(overiden_parameter)
One solution might be to create a simple subclass:
```
class OriginalClass(object):
    def __init__(self, some_param="default_value"):
        ...
```
```
class MockedOriginalClass(OriginalClass):
    def __init__(self):
        super(MockedOriginalClass, self).__init__("some_other_value")
        ...
```
How do I mock/patch OriginalClass to be MockedOriginalClass throughout the code? Keep in mind that I do want to keep the functionality of the original class; the only thing I want to change is its default __init__ parameter.
I feel this is a very simple thing to do with mocking; I just haven't quite figured out how to do it.
I found out about this question:
Python unittest mock: Is it possible to mock the value of a method's default arguments at test time?
It's very close but I don't think the same trick can be applied to the __init__ method.
One way to do this is by mocking the whole class for your specific tests, like this:
Example:
I have a class SomeClass that I want to mock; the mock class MockSomeClass will stand in for it.
class MockSomeClass(SomeClass):
    '''
    Mock Class
    '''
    def __init__(self, overiden_parameter):
        self.overiden_parameter = overiden_parameter
So during the test you use the mock class, which has the overridden functionality, while the behavior of the other functions remains the same (inheritance).
Patching
mock_some_class_obj = MockSomeClass()

@mock.patch('SomeClass', return_value=mock_some_class_obj)
def test1(self, mock_some_class_obj):
    '''
    Test 1
    '''
    obj = SomeClass()
So in the code, whenever you create an object of SomeClass, an object of the mock class will be returned instead; in the mock class you can add your own functionality.
Look at @Martijn Pieters' comment, but alternatively you could use monkey patching (https://en.wikipedia.org/wiki/Monkey_patch), which is supported in pytest (https://docs.pytest.org/en/documentation-restructure/how-to/monkeypatch.html), to override the __init__ method.
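As a hedged sketch of that monkeypatch idea (the module path some_module, the attribute name some_param, and the single defaulted parameter are assumptions, not code from the question), overriding the default of __init__ could look like this in a pytest test:

import some_module  # hypothetical module that defines SomeClass

def test_overridden_default(monkeypatch):
    # in Python 3, SomeClass.__init__ is a plain function, so its defaults can be swapped
    monkeypatch.setattr(some_module.SomeClass.__init__, "__defaults__",
                        ("some_other_value",))
    obj = some_module.SomeClass()  # no argument given, so the patched default is used
    assert obj.some_param == "some_other_value"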
For a recursive function we can do:
def f(i):
    if i < 0: return
    print i
    f(i-1)

f(10)
However, is there a way to do the following?
class A:
    # do something
    some_func(A)
    # ...
If I understand your question correctly, you should be able to reference class A within class A by putting the type annotation in quotes. This is called forward reference.
class A:
    # do something
    def some_func(self, a: 'A'):
        # ...
        pass
See ref below
https://github.com/python/mypy/issues/3661
https://www.youtube.com/watch?v=AJsrxBkV3kc
In Python you cannot reference the class in the class body, although in languages like Ruby you can do it.
In Python you can instead use a class decorator, but that will be called only once the class has been initialized. Another way could be to use a metaclass, but it depends on what you are trying to achieve.
You can't, with the specific syntax you're describing, due to the time at which things are evaluated. The example function works because the name resolution of f in the call f(i-1) within the function body is not performed until the function is actually called; at that point f exists in the enclosing scope, since the function definition has already been evaluated. In the class example, by contrast, the reference to the class name is looked up while the class definition is still being evaluated, so it does not yet exist in the local scope.
Alternatively, the desired behavior can be accomplished using a metaclass, like so:
class MetaA(type):
    def __init__(cls, name, bases, dct):
        super(MetaA, cls).__init__(name, bases, dct)
        some_func(cls)

class A(object):
    __metaclass__ = MetaA
    # do something
    # ...
Using this approach you can perform arbitrary operations on the class object at the time that the class is evaluated.
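Note that the __metaclass__ attribute only works in Python 2. A sketch of the same idea in Python 3 syntax (still assuming some_func is defined elsewhere) would be:

class MetaA(type):
    def __init__(cls, name, bases, dct):
        super().__init__(name, bases, dct)
        some_func(cls)  # called once, when class A is created

class A(metaclass=MetaA):
    # do something
    pass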
Maybe you could try calling __class__.
Right now I'm writing a code that calls a class method from within the same class.
It is working well so far.
I'm creating the class methods using something like:
@classmethod
def my_class_method(cls):
    return None
And calling then by using:
x = __class__.my_class_method()
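For context, a small self-contained sketch of that pattern (class and method names are just placeholders) might look like this; inside a method body, __class__ refers to the class the method was defined in:

class Worker:
    @classmethod
    def my_class_method(cls):
        return "called"

    def run(self):
        # __class__ is the implicit closure cell also used by zero-argument super()
        return __class__.my_class_method()

print(Worker().run())  # called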
It seems most of the answers here are outdated. As of Python 3.7:
from __future__ import annotations
Example:
$ cat rec.py
from __future__ import annotations

class MyList:
    def __init__(self, e):
        self.data = [e]
    def add(self, e):
        self.data.append(e)
        return self
    def score(self, other: MyList):
        return len([e
                    for e in self.data
                    if e in other.data])

print(MyList(8).add(3).add(4).score(MyList(4).add(9).add(3)))
$ python3.7 rec.py
2
Nope. It works in a function because the function contents are executed at call-time. But the class contents are executed at define-time, at which point the class doesn't exist yet.
It's not normally a problem because you can hack further members into the class after defining it, so you can split up a class definition into multiple parts:
class A(object):
    spam = 1

some_func(A)

A.eggs = 2
def _A_scramble(self):
    self.spam = self.eggs = 0
A.scramble = _A_scramble
It is, however, pretty unusual to want to call a function on the class in the middle of its own definition. It's not clear what you're trying to do, but chances are you'd be better off with decorators (or the relatively new class decorators).
There isn't a way to do that within the class scope, not unless A was defined to be something else first (and then some_func(A) will do something entirely different from what you expect)
Unless you're doing some sort of stack inspection to add bits to the class, it seems odd why you'd want to do that. Why not just:
class A:
    # do something
    pass

some_func(A)
That is, run some_func on A after it's been made. Alternately, you could use a class decorator (syntax for it was added in 2.6) or metaclass if you wanted to modify class A somehow. Could you clarify your use case?
If you want to do just a little hacky thing do
class A(object):
    ...

some_func(A)
If you want to do something more sophisticated you can use a metaclass. A metaclass is responsible for manipulating the class object before it gets fully created. A template would be:
class AType(type):
    def __new__(meta, name, bases, dct):
        cls = super(AType, meta).__new__(meta, name, bases, dct)
        some_func(cls)
        return cls

class A(object):
    __metaclass__ = AType
    ...
type is the default metaclass. Instances of metaclasses are classes so __new__ returns a modified instance of (in this case) A.
For more on metaclasses, see http://docs.python.org/reference/datamodel.html#customizing-class-creation.
If the goal is to call a function some_func with the class as an argument, one answer is to declare some_func as a class decorator. Note that the class decorator is called after the class is initialized. It will be passed the class that is being decorated as an argument.
def some_func(cls):
    # Do something
    print(f"The answer is {cls.x}")
    return cls  # Don't forget to return the class

@some_func
class A:
    x = 1
If you want to pass additional arguments to some_func, you have to return a function from the decorator:
def some_other_func(prefix, suffix):
    def inner(cls):
        print(f"{prefix} {cls.__name__} {suffix}")
        return cls
    return inner

@some_other_func("Hello", " and goodbye!")
class B:
    x = 2
Class decorators can be composed, which results in them being called in the reverse order of declaration:
@some_func
@some_other_func("Hello", "and goodbye!")
class C:
    x = 42
The result of which is:
# Hello C and goodbye!
# The answer is 42
What do you want to achieve? It's possible to access a class to tweak its definition using a metaclass, but it's not recommended.
Your code sample can be written simply as:
class A(object):
    pass

some_func(A)
If you want to refer to the same object, just use 'self':
class A:
    def some_func(self):
        another_func(self)
If you want to create a new object of the same class, just do it:
class A:
    def some_func(self):
        foo = A()
If you want to have access to the class object itself (most likely not what you want), again, just do it:
class A:
    def some_func(self):
        another_func(A)  # note that it reads A, not A()
Remember that in Python, type hints are mainly there for auto-completion: they help the IDE infer types and warn the user before runtime. At runtime, type hints are almost never used (except in some cases), so you can do something like this:
from typing import Any, Optional, NewType

LinkListType = NewType("LinkList", object)

class LinkList:
    value: Any
    _next: LinkListType

    def set_next(self, ll: LinkListType):
        self._next = ll

if __name__ == '__main__':
    r = LinkList()
    r.value = 1
    r.set_next(ll=LinkList())
    print(r.value)
And as you can see, the IDE successfully infers its type as LinkList.
Note: since next can be None, hinting this in the type would be better; I just didn't want to confuse the OP:
class LinkList:
    value: Any
    next: Optional[LinkListType]
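As an aside, a minimal alternative sketch that avoids the NewType helper entirely is to use a plain string forward reference (or the from __future__ import annotations shown in an earlier answer):

from typing import Any, Optional

class LinkList:
    value: Any
    next: Optional['LinkList']  # string forward reference to the class itself

    def set_next(self, ll: 'LinkList') -> None:
        self.next = ll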
It's ok to reference the name of the class inside its body (like inside method definitions) if it's actually in scope... Which it will be if it's defined at top level. (In other cases probably not, due to Python scoping quirks!).
For an illustration of the scoping gotcha, try to instantiate Foo:
class Foo(object):
    class Bar(object):
        def __init__(self):
            self.baz = Bar.baz
        baz = 15
    def __init__(self):
        self.bar = Foo.Bar()
(It's going to complain about the global name 'Bar' not being defined.)
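One possible fix, sketched here, is to avoid the bare name and resolve the attribute at runtime instead, for example through type(self):

class Foo(object):
    class Bar(object):
        baz = 15
        def __init__(self):
            self.baz = type(self).baz  # looked up at call time, so this works

    def __init__(self):
        self.bar = Foo.Bar()

print(Foo().bar.baz)  # 15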
Also, something tells me you may want to look into class methods: docs on the classmethod function (to be used as a decorator), a relevant SO question. Edit: Ok, so this suggestion may not be appropriate at all... It's just that the first thing I thought about when reading your question was stuff like alternative constructors etc. If something simpler suits your needs, steer clear of @classmethod weirdness. :-)
Most code in the class will be inside method definitions, in which case you can simply use the name A.
I am making a set of classes that call functions defined in a different module. To know which function they must call, the function is stored as a class attribute (or at least that is what I tried). However, when I try to call it, Python treats the function as a method and passes self as an argument, which causes an error because the function receives too many arguments. Do you know how I can avoid the function being turned into a method?
The code would be like:
# Module A
def func1(a):
    print a

def func2(a):
    print a, a

# Module B
from A import *

class Parent:
    def func(self):
        self.sonFunc("Hiya!")

class Son1(Parent):
    sonFunc = func1

class Son2(Parent):
    sonFunc = func2

s = Son1()
s.func()
# Should print "Hiya!"
s = Son2()
s.func()
# Should print "Hiya! Hiya!"
Thanks
What you are doing is somewhat of a nonstandard/odd thing, but this should work:
class Son_1(object):
    son_func = staticmethod(func_1)

class Son_2(object):
    son_func = staticmethod(func_2)
Normally, staticmethod is used as a decorator, but since decorators are just syntactic sugar, you can use it this way too.
An arguably cleaner but also more advanced way would be with a metaclass:
class HasSonMeta(type):
    def __new__(cls, name, bases, attrs):
        attrs['son_func'] = staticmethod(attrs.pop('__son_func__'))
        return type.__new__(cls, name, bases, attrs)

class Son1(object):
    __metaclass__ = HasSonMeta
    __son_func__ = func_1

class Son2(object):
    __metaclass__ = HasSonMeta
    __son_func__ = func_2
Using this form, you could also define the function directly in the class (though then it gets even more confusing to anyone reading this code):
class Son3(object):
    __metaclass__ = HasSonMeta
    def __son_func__():
        pass
While there could be a very narrow/obscure scenario where this would be an optimal implementation, you would probably be better served by putting your functions in a base class and then referring to (or overriding) them as needed in the children.
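For completeness, a rough sketch of that base-class approach (using Python 3 print and the names from the question) could look like this:

class Parent(object):
    def son_func(self, a):
        raise NotImplementedError  # children provide the real behaviour

    def func(self):
        self.son_func("Hiya!")

class Son1(Parent):
    def son_func(self, a):
        print(a)

class Son2(Parent):
    def son_func(self, a):
        print(a, a)

Son1().func()  # Hiya!
Son2().func()  # Hiya! Hiya!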