I'm creating a script that's actually an environment for other users to write code in.
I've declared several methods in a class and instantiated an object so users can use those methods as simple interpreter functions, like so:
from code import interact

class framework:
    def method1(self, arg1):
        ...  # code for method1

    def method2(self, arg2):
        ...  # code for method2

def main():
    fw = framework()
    # Aliases
    method1 = fw.method1
    method2 = fw.method2
    interact(local=locals())
Since I don't want users to call the methods as fw.method1(arg), I set up aliases. The problem is that, since the framework class is under development, I have to keep updating the main script with new aliases for every method I create.
Is there a simple way of getting rid of the "fw." part of the calls and having all the methods of class framework automatically visible inside main?
You control the dictionary passed to interact, so put what you want into it without worrying about local variables in main:
d = {}
for n in vars(framework):
    if n.startswith('_'):
        continue  # probably
    x = getattr(fw, n)
    if callable(x):
        d[n] = x
interact(local=d)
This works for regular, static, and class methods. It doesn’t see instance attributes (unless they’re shadowing a class attribute) or inherited methods.
The former matters only if the instance stores functions as attributes (in which case you might or might not want them available). The latter is convenient in that it omits object; if there are other base classes, they can easily be included by searching framework.__mro__ (and still omitting object).
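For example, a sketch of that __mro__ variant, assuming the same fw instance and interact import as in the code above:

# Sketch: also pick up methods inherited from base classes, but skip object.
# Because __mro__ is walked in order, a subclass method wins over a base one.
d = {}
for cls in type(fw).__mro__:
    if cls is object:
        continue
    for n in vars(cls):
        if n.startswith('_'):
            continue
        x = getattr(fw, n)
        if callable(x) and n not in d:
            d[n] = x
interact(local=d)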
I don't know what you plan to do with the functions, but if they are not static, they will likely not work outside of the class:
fs = [func for func in dir(c) if callable(getattr(c, func)) and not func.startswith("__")]
for func in fs:
    globals()[func] = getattr(c, func)
This will put all custom functions of c into the global scope.
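For illustration, here is what that looks like with a small made-up class C standing in for your framework (the names C, greet, and c are not from the question):

# Hypothetical usage sketch: c is an instance, so the names bound here are
# bound methods and can be called without the instance prefix.
class C:
    def greet(self, name):
        print('hello', name)

c = C()
fs = [func for func in dir(c) if callable(getattr(c, func)) and not func.startswith("__")]
for func in fs:
    globals()[func] = getattr(c, func)

greet('world')   # prints: hello world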
You basically want to do two things:
get a list of methods of a class instance
dynamically add functions to your local scope
The former can be achieved with the inspect module from the standard library, the latter by using vars.
Try the following:
import inspect
from code import interact

class framework:
    def method1(self, arg1):
        ...  # code for method1

    def method2(self, arg2):
        ...  # code for method2

def main():
    fw = framework()
    # Get the methods of fw and update vars().
    # getmembers returns a list of two-tuples of (<name>, <bound method>).
    methods = inspect.getmembers(fw, predicate=inspect.ismethod)
    vars().update({name: method for name, method in methods})
    interact(local=locals())
Here's something similar to @Davis Herring's answer, fleshed out and repackaged:
#from code import interact
def interact(local):
    local['method1'](42)
    local['method2']('foobar')

class Framework:
    def method1(self, arg1):
        print('Framework.method1({!r}) called'.format(arg1))

    def method2(self, arg2):
        print('Framework.method2({!r}) called'.format(arg2))

def get_callables(obj):
    return {name: getattr(obj, name) for name in vars(obj.__class__)
                if callable(getattr(obj, name))}

def main():
    fw = Framework()
#    # Aliases
#    method1 = fw.method1
#    method2 = fw.method2
    interact(local=get_callables(fw))

if __name__ == '__main__':
    main()
Output:
Framework.method1(42) called
Framework.method2('foobar') called
class StaticClass(object):
    words = []
    StaticClass.init()

    @staticmethod
    def init(file_name):
        ...
        words.append('word')
        ...

    @staticmethod
    def fun():
        print('fun')

test = StaticClass()
The error message is:
StaticClass.init()
NameError: name 'StaticClass' is not defined
Why can't I call the static function inside the class?
I want to use a class to do this and also want users to be able to do:
StaticClass.fun()
How to achieve the effect?
Why can't I call the static function inside the class?
As said in the comments, class bodies are executed in Python. It is like a zero-argument function that automatically runs once; then invokes type, passing it the class name, bases and a dict of class attributes from the local variables of that function; and only then assigns the result from type (i.e., a class object) to the name.
You can even put logic and other statements in there:
import random

# A class that sometimes fails to exist when you import the module.
class spam:
    if __name__ != '__main__':
        1 / random.randrange(3)
    else:
        print("thank you for running this as the main script.")

    def __init__(self):
        ...  # etc.
As such, names have to be in scope. At the time that this code is running - because it runs immediately and automatically, rather than being delayed like a function - there isn't a StaticClass until after this code has completed. Consequently, the code inside can't reference the class itself.
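Roughly speaking (and glossing over the metaclass machinery), the class statement boils down to something like this sketch:

# Simplified picture of what `class StaticClass(object): ...` does.
def _build_class_body():
    words = []

    @staticmethod
    def fun():
        print('fun')

    return locals()   # {'words': [...], 'fun': <staticmethod object>}

StaticClass = type('StaticClass', (object,), _build_class_body())
# Only now does the name StaticClass exist, so it cannot be used inside the body.
StaticClass.fun()     # prints: fun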
To solve this, simply move the call to the end:
class StaticClass(object):
    words = []

    @staticmethod
    def init(file_name):
        ...
        words.append('word')
        ...

    @staticmethod
    def fun():
        print('fun')

StaticClass.init()
While the body of the StaticClass class is being executed, there is no reference to StaticClass; it doesn't exist yet. The body of the class is executed inside a namespace, which is a dictionary. After that, a new instance of type type, here named StaticClass, is created using that populated dictionary and added to the global namespace. That's roughly what the class keyword does.
Actually, I don't know why you want this to work. Others have already suggested better ways to deal with it, but here is a workaround if you want to call the staticmethod as an initializer function while the class is being created, from inside the class body:
class StaticClass(object):
    words = []

    @staticmethod
    def init(file_name, words=words):
        words.append(file_name)

    init.__get__(init)('word')

    @staticmethod
    def fun():
        print('fun')

test = StaticClass()
StaticClass.fun()
print(StaticClass.words)
output:
fun
['word']
Yes, it's really a mess; I just wanted to make it work to show why it didn't work before. You need to call init.__get__ because staticmethod objects are not callable (I'm on Python 3.9.7; they only became directly callable in 3.10). This way init is executed when the class is created. Also, words is not accessible inside init because the class scope does not enclose the body of init, so I pass a reference to it as a default parameter.
I'm unit testing a module I wrote and am running into a problem with a default argument whose class I have mocked.
This is how it looks at a high level:
main_file.py

from sub_file import SubClass

class MainClass(object):
    def main_func(self):
        sub_class_obj = SubClass()
        sub_class_obj.sub_func()

sub_file.py

from helpers import Helper

class SubClass(object):
    def sub_func(self, my_att=Helper(2)):
        self.my_att = my_att

helpers.py

class Helper():
    def __init__(self, my_val):
        self.my_val = my_val

test.py

from unittest.mock import patch

from main_file import MainClass

class TestClass(object):
    @patch('sub_file.Helper', MockHelper)
    def my_test(self):
        main_class_obj = MainClass()
        main_class_obj.main_func()
When I call it in a way that provides my_att explicitly, everything works and the mock is used, but when I rely on the default value, I get the original Helper object.
Any idea how to make the default value for this attribute to receive the mock as well?
Thanks in advance!
The problem is that the default value is evaluated at import time, so it is already set before you patch Helper; the defaults are saved in the function object at that point.
You can, however, also patch the default arguments of your function (which can be accessed via __defaults__):
from sub_file import SubClass

class TestClass(object):
    @patch('sub_file.Helper', MockHelper)
    @patch.object(SubClass.sub_func, "__defaults__", (MockHelper(),))
    def my_test(self):
        main_class_obj = MainClass()
        main_class_obj.main_func()
Note that __defaults__ has to be a tuple of arguments.
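As a quick illustration of how defaults are stored on a plain function (separate from the test code above; greet is just a made-up example):

def greet(name, greeting='hello'):
    print(greeting, name)

print(greet.__defaults__)        # ('hello',) - evaluated once, at definition time
greet.__defaults__ = ('howdy',)  # replace the stored default (must be a tuple)
greet('world')                   # prints: howdy world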
You could also use monkeypatch to do the same:
from sub_file import SubClass

class TestClass(object):
    @patch('sub_file.Helper', MockHelper)
    def my_test(self, monkeypatch):
        monkeypatch.setattr(SubClass.sub_func, "__defaults__", (MockHelper(),))
        main_class_obj = MainClass()
        main_class_obj.main_func()
UPDATE:
I didn't realize that this would not work with Python 2. Apart from the different name of the attribute (func_defaults instead of __defaults__), this only works with standalone functions, not with methods, because setattr is not supported for them in Python 2. Here is a workaround for Python 2:
from sub_file import SubClass

class TestClass(object):
    @patch('sub_file.Helper', MockHelper)
    def my_test(self):
        orig_sub_func = SubClass.sub_func
        with patch.object(SubClass, "sub_func",
                          lambda o, attr=MockHelper(): orig_sub_func(o, attr)):
            main_class_obj = MainClass()
            main_class_obj.main_func()
This way, the original sub_func is replaced by a function that has its own default value, but otherwise delegates the functionality to the original function.
UPDATE 2:
Just saw the answer by @chepner, and it is correct: the best way would be to refactor your code accordingly. Only if you cannot do that should you try this answer.
Default values are created when the function is defined, not when it is called. It's too late to patch Helper in your test, because SubClass.sub_func has already been defined.
Rather than patching anything, though, re-write MainClass so that there is no hard-coded reference to SubClass: then you can create the proper instance yourself without relying on a default value.
You can pass an instance directly:
class MainClass(object):
    def main_func(self, sub_class_obj):
        sub_class_obj.sub_func()

class TestClass(object):
    def my_test(self):
        mock_obj = MockHelper()
        main_class_obj = MainClass()
        main_class_obj.main_func(mock_obj)
or take a factory function that will be called to create the subclass object.
class MainClass(object):
    def main_func(self, factory=SubClass):
        sub_class_obj = factory()
        sub_class_obj.sub_func()

class TestClass(object):
    def my_test(self):
        main_class_obj = MainClass()
        main_class_obj.main_func(lambda: SubClass(MockHelper()))
This would be the layout
some_function.py

def some_function():
    print("some_function")

some_library.py

from some_function import some_function

class A:
    def xxx(self):
        some_function()

main.py

from some_library import A
from some_function import some_function

def new_some_function():
    print("new_some_function")

if __name__ == '__main__':
    some_function = new_some_function
    a = A()
    a.xxx()
In the class A, the method xxx calls some_function. Is it possible to override it with something else, without re-implementing the entire class?
I think you are looking for monkey patching (changing classes/modules dynamically at runtime). This way you don't need to rewrite the class A or use inheritance as suggested in other comments - you said you don't want that, so try this solution:
import some_library  # import the module itself, or this will not work (because of namespaces)

def new_some_function():
    print("new_some_function")

if __name__ == '__main__':
    # Patch the name in some_library's namespace. Rebinding the name you imported
    # into your own namespace (as in the question) only changes your copy of it;
    # you need to change it in the namespace where A looks it up.
    some_library.some_function = new_some_function
That way the replacement is what class A (and anything else in some_library) will call from then on. Be careful: if the target is an instance or class method, you need to create the new function with the proper parameters, like this:
def new_some_function(self):
    # for instance methods; you may add other args, but 'self' must come first
    ...
def new_some_function(cls):
    # for class methods; you may add other args, but 'cls' must come first
    ...
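For instance, replacing the method xxx itself (rather than some_function) could look like this sketch, reusing the question's some_library layout; new_xxx is a made-up name:

import some_library

def new_xxx(self):
    # 'self' is required: A().xxx() passes the instance automatically
    print("patched xxx")

some_library.A.xxx = new_xxx   # existing and future A instances now use this
some_library.A().xxx()         # prints: patched xxx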
You provide very little information about your use case here. As one of the comments points out, this might be a case for inheritance. If you are in a testing context, you may not want to use inheritance though, but you might rather want to use a mock-object.
Here is the inheritance version:
from some_library import A

def new_some_function():
    print("new_some_function")

class B(A):
    def xxx(self):
        new_some_function()

if __name__ == '__main__':
    a = B()
    a.xxx()
Note how class B derives from class A through the class B(A) statement. This way, class B inherits all functionality from A, and the definition of class B only consists of the parts where B differs from A. In your example, that is the fact that the xxx method should call new_some_function instead of some_function.
Here is the mock version:
from unittest import mock

from some_library import A

def new_some_function():
    print("new_some_function")

if __name__ == '__main__':
    with mock.patch('some_library.some_function') as mock_some_function:
        mock_some_function.side_effect = new_some_function
        a = A()
        a.xxx()
As mentioned above this approach is mostly useful if you are in a testing context and if some_function does something costly and/or unpredictable. In order to test code that involves a call to some_function, you may temporarily want to replace some_function by something else, that is cheap to call and behaves in a predictable way. In fact, for this scenario, replacing some_function by new_some_function might even be more than what is actually needed. Maybe, you just want an empty hull that can be called and that always returns the same value (instead of the side_effect line, you can specify a constant .return_value in the above code example). One of the key functionalities of mock objects is that you can later check if that function has been called. If testing is your use case, I would very much recommend looking at the documentation of the python mock module.
Note that the example uses the mock.patch context manager. This means that within the managed context (i.e. the block inside the with-statement) some_library.some_function is replaced by a mock object, but once you leave the managed context, the original functionality is put back in place.
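If a fixed result is enough, you can set return_value instead of side_effect; a small sketch building on the example above (the value 42 is arbitrary):

from unittest import mock
from some_library import A

with mock.patch('some_library.some_function') as mock_some_function:
    mock_some_function.return_value = 42      # every call simply returns 42
    a = A()
    a.xxx()
    mock_some_function.assert_called_once()   # verify that xxx() really called it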
You may just create another class and override the method you need.
Just as an example:
class myInt(int):
    def __pow__(self, x):
        return 0

a = myInt(10)
a + 10  # 20
a ** 2  # 0
In this case a is an int and has access to all the methods of the int class, but will use the __pow__ method I've defined.
What you need is inheritance. You can subclass a class, and the child class inherits all of the parent class's functions. If you want to override a parent class function, you just provide a different implementation with the same name in the child class.
from some_library import A
from some_function import some_function

def new_some_function():
    print("new_some_function")

class B(A):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def xxx(self):
        new_some_function()

if __name__ == '__main__':
    a = B()
    a.xxx()
Output:
new_some_function
Your syntax may differ depending on the Python version.
In Python 3:
class B(A):
    def __init__(self):
        super().__init__()
In Python 2:
class B(A):
    def __init__(self):
        super(B, self).__init__()
Is it OK to define functions outside a particular class, use them in the class, and then import that class elsewhere and use it?
Are there any risks associated with doing that, rather than making all functions methods of the class?
I'm writing this code in python 2.7
For example, make a class like this:
def func(a):
    return a

class MyClass():
    def class_func(self, thing):
        return func(thing)
Then import MyClass into another python script and use its class_func method.
Doing this is okay, and is in fact a language feature of Python. Functions have access to names in the scope they are defined in, regardless of where they are called from.
For example, you can also do something like this:
factor = 2

def multiply(num):
    return num * factor
See this post for some background information.
The "risk" associated with this is that the outside name is explicitly not under your control. It can be freely modified by other parts of your program, without the implication being clear.
Consider this example:
def func(a):
    return a

class MyClass(object):  # note: you should inherit from object in py2.X!
    def class_func(self, thing):
        return func(thing)

myinstance = MyClass()
foo = myinstance.class_func(1)

def func(a):
    return str(a)

bar = myinstance.class_func(1)
Here, foo and bar will be different, namely the integer 1 and the string "1".
Usually, making this possible is the entire point of using such a structure, however.
It's OK, but if func is only used in MyClass, it can be helpful to make it a staticmethod and place it inside MyClass near class_func:
class MyClass(object):
    @staticmethod
    def _func(a):
        return a

    def class_func(self, thing):
        return type(self)._func(thing)
I am making a set of classes that call functions defined in a different module. To know which function they must call, the function is stored as a class attribute (or at least that is what I tried). However, when I try to call it, Python assumes that the function is a method of the class and passes self as an argument, which causes an error because the function receives too many arguments. Do you know how I can avoid the function becoming a bound method?
The code would look like this:
# Module A
def func1(a):
    print a

def func2(a):
    print a, a

# Module B
from A import *

class Parent:
    def func(self):
        self.sonFunc("Hiya!")

class Son1(Parent):
    sonFunc = func1

class Son2(Parent):
    sonFunc = func2

s = Son1()
s.func()
# Should print "Hiya!"

s = Son2()
s.func()
# Should print "Hiya! Hiya!"
Thanks
What you are doing is somewhat of a nonstandard/odd thing, but this should work:
class Son_1(object):
    son_func = staticmethod(func_1)

class Son_2(object):
    son_func = staticmethod(func_2)
Normally, staticmethod is used as a decorator, but since decorators are just syntactical sugar, you can use them this way too.
An arguably cleaner but also more advanced way would be with a metaclass:
class HasSonMeta(type):
    def __new__(cls, name, bases, attrs):
        attrs['son_func'] = staticmethod(attrs.pop('__son_func__'))
        return type.__new__(cls, name, bases, attrs)

class Son1(object):
    __metaclass__ = HasSonMeta
    __son_func__ = func_1

class Son2(object):
    __metaclass__ = HasSonMeta
    __son_func__ = func_2
Using this form, you could also define the function directly in the class (though then it gets even more confusing to anyone reading this code):
class Son3(object):
    __metaclass__ = HasSonMeta

    def __son_func__():
        pass
While there could be a very narrow/obscure scenario where this would be the optimal implementation, you would probably be better served by putting your functions in a base class and then referring to (or overriding) them as needed in the children.
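A minimal sketch of that base-class approach, reusing the question's idea (the names Base, Son1, Son2 are illustrative; Python 2 print syntax to match the question):

class Base(object):
    def son_func(self, a):
        print a          # default implementation; children override this

    def func(self):
        self.son_func("Hiya!")

class Son1(Base):
    pass                 # inherits son_func as-is

class Son2(Base):
    def son_func(self, a):
        print a, a       # override with a different behaviour

Son1().func()   # prints: Hiya!
Son2().func()   # prints: Hiya! Hiya!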