Listing Implemented Methods in Python Class

I am working on a piece of scientific code in which we want to implement several solvers and compare them. We use a config file that declares the name of the solver to apply. Since we are constantly adding new solvers, I've collected them as methods of a class so that we can pull the name from the config file and dispatch with getattr, without having to touch any code other than the solver method itself. The class has no attributes and no methods other than the solvers.
I've also implemented an error message in case the chosen solver doesn't exist. The whole block looks like this:
try:
    solver = getattr(Solvers, control['Solver'])
except AttributeError:
    print('\n Invalid Solver Choice! Implemented solvers are: \n'
          + str(set(dir(Solvers)) - set(dir(Empty))))  # Implemented solvers
    raise
solver(inputs)  # Call the desired solver
This is convenient as it automatically updates our error handling with the addition of a new method. My question relates to the error message there. Specifically, I want to return a list of the implemented solvers and only of the implemented solvers.
It doesn't suffice to simply list the output of dir(Solvers), since this includes a lot of other methods like __init__. Similarly, I can't set-subtract the results of dir(object), since this still ends up returning a few extra things like __dict__ and __module__. This is why I have the class Empty, which is just:
class Empty:
    pass
I'm wondering if there exists a more elegant way to implement this than the kludgey Empty class.
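One alternative worth noting (my suggestion, not from the question) is `inspect.getmembers` with a predicate, which skips inherited dunders without needing a sentinel class:

```python
import inspect

class Solvers:
    def newton(self, inputs):
        pass
    def bisection(self, inputs):
        pass

# inspect.isfunction matches only plain functions defined on the class;
# the attributes inherited from object are slot wrappers, so they are
# filtered out automatically.
solver_names = [name for name, _ in inspect.getmembers(Solvers, inspect.isfunction)]
print(solver_names)  # ['bisection', 'newton'] (getmembers sorts by name)
```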

One way is:
set(A.__dict__) - set(type.__dict__)
However, it will still return __weakref__. Instead, you can use:
set(A.__dict__) - set(type("", (), {}).__dict__)
It compares your class to type("", (), {}). This creates a new class object, like your Empty, but is a bit more subtle. For an example class:
>>> class A: pass
...
It gives:
>>> set(A.__dict__) - set(type("", (), {}).__dict__)
set()
And for:
>>> class B:
... def f(self): pass
...
It returns:
>>> set(B.__dict__) - set(type("", (), {}).__dict__)
{'f'}
You can do it with dir like:
>>> set(dir(B)) - set(dir(type("", (), {})))
{'f'}

Explicit is better than implicit, and you may want to have non-solver methods in your class. The simplest explicit-yet-DRY solution is to use a decorator to mark which methods should be considered "solver" methods:
def solver(fun):
    fun.is_solver = True
    return fun

class Solvers(object):
    @solver
    def foo(self):
        return "foo"

    @solver
    def bar(self):
        return "bar"

    @classmethod
    def list_solvers(cls):
        return [name for name, attr
                in cls.__dict__.items()
                if getattr(attr, "is_solver", False)]
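A runnable sketch of how this decorator approach plays with the config-driven lookup from the question (the solver names and the `control` dict here are placeholders):

```python
def solver(fun):
    # mark the function so list_solvers can find it later
    fun.is_solver = True
    return fun

class Solvers:
    @solver
    def foo(self):
        return "foo"

    def helper(self):  # unmarked, so never listed as a solver
        return "aux"

    @classmethod
    def list_solvers(cls):
        return [name for name, attr in cls.__dict__.items()
                if getattr(attr, "is_solver", False)]

control = {'Solver': 'foo'}  # stand-in for the parsed config file
if control['Solver'] not in Solvers.list_solvers():
    raise ValueError("Invalid solver! Implemented: %s" % Solvers.list_solvers())
result = getattr(Solvers(), control['Solver'])()
print(result)  # foo
```

The error message now lists exactly the marked methods, and helper methods stay invisible to it.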

Related

Python inheritable functions

Is there a way in Python to either "type" functions or for functions to
inherit test suites? I am doing some work evaluating several different
implementations of different functions with various criteria (for
example I may evaluate different sort functions based on speed for the
array size and memory requirements). And I
want to be able to automate the testing of the functions. So I would
like a way to identify a function as being an implementation of
a certain operator so that the test suite can just grab all functions
that are implementation of that operator and run them through the
tests.
My initial thought was to use classes and subclasses, but the class
syntax is a bit finicky for this purpose because I would first have
to create an instance of the class before I could call it as a
function... that is, unless there is a way to allow __init__ to return
a type other than None.
Can metaclasses or objects be used in this fashion?
Functions are first class objects in Python and you can treat them as such, e.g. add some metadata via setattr:
>>> def function1(a):
... return 1
...
>>> type(function1)
<type 'function'>
>>> setattr(function1, 'mytype', 'F1')
>>> function1.mytype
'F1'
Or the same using a simple parametrized decorator:
def mytype(t):
    def decorator(f):
        f.mytype = t
        return f
    return decorator

@mytype('F2')
def function2(a, b, c):
    return 2
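With the tag in place, a test harness can sweep a namespace and pick up every implementation of a given operator; a small sketch (the function names are made up for illustration):

```python
def mytype(t):
    def decorator(f):
        f.mytype = t
        return f
    return decorator

@mytype('sort')
def quick_sort(xs):
    return sorted(xs)

@mytype('sort')
def bubble_sort(xs):
    return sorted(xs)  # placeholder implementation

def unrelated():
    pass

# collect every callable tagged as a 'sort' implementation
sort_impls = [f for f in list(globals().values())
              if callable(f) and getattr(f, 'mytype', None) == 'sort']
for impl in sort_impls:
    assert impl([3, 1, 2]) == [1, 2, 3]  # run each through the same test
print(sorted(f.__name__ for f in sort_impls))  # ['bubble_sort', 'quick_sort']
```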
I apologize, as I cannot comment, but just to clarify: you stated "I would first have to create an instance of the class before I could call it as a function..." Does this not accomplish what you are trying to do?
class functionManager:
    def __init__(self, testFunction1=importedTests.testFunction1):
        self.testFunction1 = testFunction1

functionManager = functionManager()
Then just include the line from functionManagerFile import functionManager wherever you want to use it?

Python: How can I get a list of function names from within __getattr__ function?

How can I get the list of class functions from within __getattr__ function?
Python v2.7 if it matters.
Trying to use dir within __getattr__ leads to infinite recursion.
class Hal(object):
    def __getattr__(self, name):
        print 'I don\'t have a %s function' % name
        names = dir(self)  # <-- infinite recursion happens here
        print 'My functions are: %s' % ', '.join(names)
        exit()

    def close_door(self):
        pass

x = Hal()
x.open_door()
Here's the output I want:
I don't have a open_door function
My functions are: close_door, __getattr__, __init__, __doc__, ...
Any other solution which gets me the output I want will work fine. I want to do fuzzy string matching in the case when a function doesn't exist to try to suggest what the user might have meant.
This works I think:
import types

class Hal(object):
    def __getattr__(self, name):
        print('I don\'t have a %s function' % name)
        funcs = (name for name, func in self.__class__.__dict__.items()
                 if isinstance(func, types.FunctionType))
        # names = dir(self)  # <-- infinite recursion happens here
        print('My functions are: %s' % ', '.join(str(f) for f in funcs))
        exit()

    @staticmethod
    def foo():
        pass

    @classmethod
    def bar(cls):
        pass

    def qux(self):
        pass

    def close_door(self):
        pass

x = Hal()
x.foo = 'bar'
x.open_door()
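For the fuzzy-matching part of the question specifically, the standard library's `difflib.get_close_matches` does the heavy lifting (a sketch; the method list is illustrative):

```python
import difflib

# candidate method names, e.g. gathered from the class __dict__ as above
method_names = ['close_door', 'open_window', 'lock_door']

# get_close_matches returns the candidates whose similarity ratio to the
# requested name clears a cutoff (0.6 by default), best matches first
suggestions = difflib.get_close_matches('open_door', method_names)
print(suggestions)  # includes 'close_door'
```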
Is there any reason why you can't do this?
names = dir(self.__class__)
Are you expecting consumers to extend instances of Hal with custom methods?
If you only want methods you've implemented, with no built-ins listed, you could try this too:
names = [prop for prop in dir(self.__class__) if prop[1] != "_"]
names = self.__class__.__dict__
possibly?
>>> class A:
...     def hello(self, x):
...         print "hello ", x
...     def my_dir(self):
...         print self.__class__.__dict__
...
>>> A().my_dir()
{'__module__': '__main__', 'my_dir': <function my_dir at 0x029A5AB0>, 'hello': <function hello at 0x029A5CB0>, '__doc__': None}
One solution is to make a copy of the dir before __getattr__ can be triggered, for example in __init__:
class Hal(object):
    def __init__(self):
        self._names = dir(self)

    def __getattr__(self, name):
        print self._names
However, for simple cases, you can just call dir (and likewise getattr, or inspect.getmembers, or whatever) on your class object to solve the problem. This doesn't work if instances can have methods added after construction, but if that's not an issue, it's easy:
names = dir(self.__class__)
However you get the names, to filter for methods, there are a few things to do.
First, you can use isinstance on getattr(self, name) and make sure it's a method-wrapper (or get the type of the bound version and make sure it's an instancemethod). If you take the values directly out of self.__class__.__dict__, you don't get exactly the same thing as when you look the names up and call getattr(self, name) or getattr(self.__class__, name). In particular, an instance method will show up as a plain function, which is easier to test for than a method-wrapper, although some of the other cases then get harder to detect.
At any rate, nothing based on type will find things that act like methods but aren't (e.g., because you've assigned a built-in function directly to the object, wrapped something in certain kinds of decorators, written custom descriptors, used an instance of a class with a __call__ method as a function, etc.). If you're doing anything fancy (or worry that someone might later add something fancy), you really need to test whether you can explicitly bind the member (or fake-bind it to None), then check whether the result is callable, and then possibly do further tests to make sure it's callable properly (because otherwise you'll get fooled by @staticmethods and similar things). Really, if this comes up (and you've really thought through your design and convinced yourself and at least one other person that it isn't insane…), you should test everything you can think of against every case you have…
If you want to know if the methods are defined in Hal or the instance as opposed to object or another base class, there are a few ways to do this, but the simplest is to just subtract out the members of the base classes. (Of course if you don't care about methods defined in the instance, Hal.__dict__ already has what you want.)
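That base-class subtraction can be written directly over the MRO; a small sketch (the class names are invented):

```python
import itertools

class Base(object):
    def inherited(self):
        pass

class Hal(Base):
    def close_door(self):
        pass

# names contributed by every ancestor (Base, object, ...)
inherited_names = set(itertools.chain.from_iterable(
    vars(cls) for cls in Hal.__mro__[1:]))

# what remains is only what Hal itself defines
own_names = set(vars(Hal)) - inherited_names
print(own_names)  # {'close_door'}
```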

Python: can a decorator determine if a function is being defined inside a class?

I'm writing a decorator, and for various annoying reasons[0] it would be expedient to check if the function it is wrapping is being defined stand-alone or as part of a class (and further which classes that new class is subclassing).
For example:
def my_decorator(f):
    defined_in_class = ??
    print "%r: %s" % (f, defined_in_class)

@my_decorator
def foo(): pass

class Bar(object):
    @my_decorator
    def bar(self): pass
Should print:
<function foo …>: False
<function bar …>: True
Also, please note:
At the point decorators are applied the function will still be a function, not an unbound method, so testing for instance/unbound method (using type() or inspect) will not work.
Please only offer suggestions that solve this problem — I'm aware that there are many similar ways to accomplish this end (ex, using a class decorator), but I would like them to happen at decoration time, not later.
[0]: specifically, I'm writing a decorator that will make it easy to do parameterized testing with nose. However, nose will not run test generators on subclasses of unittest.TestCase, so I would like my decorator to be able to determine if it's being used inside a subclass of TestCase and fail with an appropriate error. The obvious solution - using isinstance(self, TestCase) before calling the wrapped function doesn't work, because the wrapped function needs to be a generator, which doesn't get executed at all.
Take a look at the output of inspect.stack() when you wrap a method. When your decorator's execution is underway, the current stack frame is the function call to your decorator; the next stack frame down is the @ wrapping action that is being applied to the new method; and the third frame will be the class definition itself, which merits a separate stack frame because the class definition is its own namespace (that is wrapped up to create a class when it is done executing).
I suggest, therefore:
defined_in_class = (len(frames) > 2 and
                    frames[2][4][0].strip().startswith('class '))
If all of those crazy indexes look unmaintainable, then you can be more explicit by taking the frame apart piece by piece, like this:
import inspect

frames = inspect.stack()
defined_in_class = False
if len(frames) > 2:
    maybe_class_frame = frames[2]
    statement_list = maybe_class_frame[4]
    first_statement = statement_list[0]
    if first_statement.strip().startswith('class '):
        defined_in_class = True
Note that I do not see any way to ask Python about the class name or inheritance hierarchy at the moment your wrapper runs; that point is "too early" in the processing steps, since the class creation is not yet finished. Either parse the line that begins with class yourself and then look in that frame's globals to find the superclass, or else poke around the frames[1] code object to see what you can learn — it appears that the class name winds up being frames[1][0].f_code.co_name in the above code, but I cannot find any way to learn what superclasses will be attached when the class creation finishes up.
A little late to the party here, but this has proven to be a reliable means of determining if a decorator is being used on a function defined in a class:
frames = inspect.stack()
className = None
for frame in frames[1:]:
    if frame[3] == "<module>":
        # At module level, go no further
        break
    elif '__module__' in frame[0].f_code.co_names:
        className = frame[0].f_code.co_name
        break
The advantage of this method over the accepted answer is that it works with e.g. py2exe.
Some hacky solution that I've got:
import inspect

def my_decorator(f):
    args = inspect.getargspec(f).args
    defined_in_class = bool(args and args[0] == 'self')
    print "%r: %s" % (f, defined_in_class)
But it relies on the presence of a self argument in the function.
You can use the package wrapt to check for:
- instance/class methods
- classes
- freestanding functions/static methods
See the project page of wrapt: https://pypi.org/project/wrapt/
You could check if the decorator itself is being called at the module level or nested within something else.
defined_in_class = inspect.currentframe().f_back.f_code.co_name != "<module>"
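A runnable version of that check (same idea, condensed; note it treats any non-module enclosing scope, such as a nested function, as "in a class"):

```python
import inspect

def my_decorator(f):
    # co_name of the calling frame: "<module>" at top level,
    # the class name while a class body is executing
    caller = inspect.currentframe().f_back.f_code.co_name
    f.defined_in_class = caller != "<module>"
    return f

@my_decorator
def foo():
    pass

class Bar(object):
    @my_decorator
    def bar(self):
        pass

print(foo.defined_in_class, Bar.bar.defined_in_class)  # False True
```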
I think the functions in the inspect module will do what you want, particularly isfunction and ismethod:
>>> import inspect
>>> def foo(): pass
...
>>> inspect.isfunction(foo)
True
>>> inspect.ismethod(foo)
False
>>> class C(object):
...     def foo(self):
...         pass
...
>>> inspect.isfunction(C.foo)
False
>>> inspect.ismethod(C.foo)
True
>>> inspect.isfunction(C().foo)
False
>>> inspect.ismethod(C().foo)
True
You can then follow the Types and Members table to access the function inside the bound or unbound method:
>>> C.foo.im_func
<function foo at 0x1062dfaa0>
>>> inspect.isfunction(C.foo.im_func)
True
>>> inspect.ismethod(C.foo.im_func)
False
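For readers on Python 3 (the session above is Python 2): unbound methods are gone, so C.foo is already a plain function and im_func no longer exists; the underlying function of a bound method is reached via __func__ instead:

```python
import inspect

class C(object):
    def foo(self):
        pass

print(inspect.isfunction(C.foo))   # True on Python 3: no unbound methods
print(inspect.ismethod(C().foo))   # True: bound methods are still methods
print(C().foo.__func__ is C.foo)   # True: __func__ recovers the function
```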

How do I dynamically create a function with the same signature as another function?

I'm busy creating a metaclass that replaces a stub function on a class with a new one with a proper implementation. The original function could use any signature. My problem is that I can't figure out how to create a new function with the same signature as the old one. How would I do this?
Update
This has nothing to do with the actual question which is "How do I dynamically create a function with the same signature as another function?" but I'm adding this to show why I can't use subclasses.
I'm trying to implement something like Scala Case Classes in Python. (Not the pattern-matching aspect, just the automatically generated properties and the __eq__, __hash__ and __str__ methods.)
I want something like this:
>>> class MyCaseClass():
...     __metaclass__ = CaseMetaClass
...     def __init__(self, a, b):
...         pass
>>> instance = MyCaseClass(1, 'x')
>>> instance.a
1
>>> instance.b
'x'
>>> str(instance)
MyCaseClass(1, 'x')
As far as I can see, there is no way to do that with subclasses.
I believe functools.wraps does not reproduce the original call signature. However, Michele Simionato's decorator module does:
import decorator

class FooType(type):
    def __init__(cls, name, bases, clsdict):
        @decorator.decorator
        def modify_stub(func, *args, **kw):
            return func(*args, **kw) + ' + new'
        setattr(cls, 'stub', modify_stub(clsdict['stub']))

class Foo(object):
    __metaclass__ = FooType
    def stub(self, a, b, c):
        return 'original'

foo = Foo()
help(foo.stub)
# Help on method stub in module __main__:
# stub(self, a, b, c) method of __main__.Foo instance
print(foo.stub(1, 2, 3))
# original + new
Use functools.wraps:
>>> from functools import wraps
>>> def f(a, b):
...     return a + b
...
>>> @wraps(f)
... def f2(*args):
...     print(args)
...     return f(*args)
...
>>> f2(2, 5)
(2, 5)
7
It is possible to do this using inspect.getargspec. There's even a PEP in place to make it easier.
BUT -- this is not a good thing to do. Can you imagine how much of a debugging/maintenance nightmare it would be to have your functions dynamically created at runtime -- and not only that, but done so by a metaclass?! I don't understand why you have to replace the stub dynamically; can't you just change the code when you want to change the function? I mean, suppose you have a class
class Spam(object):
    def ham(self, a, b):
        return NotImplemented
Since you don't know what it's meant to do, the metaclass can't actually implement any functionality. If you knew what ham were meant to do, you could do it in ham or one of its parent classes, instead of returning NotImplemented.

Using class/static methods as default parameter values within methods of the same class

I'd like to do something like this:
class SillyWalk(object):
    @staticmethod
    def is_silly_enough(walk):
        return (False, "It's never silly enough")

    def walk(self, appraisal_method=is_silly_enough):
        self.do_stuff()
        (was_good_enough, reason) = appraisal_method(self)
        if not was_good_enough:
            self.execute_self_modifying_code(reason)
        return appraisal_method

    def do_stuff(self):
        pass

    def execute_self_modifying_code(self, problem):
        from __future__ import deepjuju
        deepjuju.kiss_booboo_better(self, problem)
with the idea being that someone can do
>>> silly_walk = SillyWalk()
>>> appraise = silly_walk.walk()
>>> is_good_walk = appraise(silly_walk)
and also get some magical machine learning happening; this last bit is not of particular interest to me, it was just the first thing that occurred to me as a way to exemplify the use of the static method in both an in-function context and from the caller's perspective.
Anyway, this doesn't work, because is_silly_enough is not actually a function: it is an object whose __get__ method will return the original is_silly_enough function. This means that it only works in the "normal" way when it's referenced as an object attribute. The object in question is created by the staticmethod() function that the decorator puts in between SillyWalk's is_silly_enough attribute and the function that's originally defined with that name.
This means that in order to use the default value of appraisal_method from within either SillyWalk.walk or its caller, we have to either
call appraisal_method.__get__(instance, owner)(...) instead of just calling appraisal_method(...)
or assign it as the attribute of some object, then reference that object property as a method that we call as we would call appraisal_method.
Given that neither of these solutions seem particularly Pythonic™, I'm wondering if there is perhaps a better way to get this sort of functionality. I essentially want a way to specify that a method should, by default, use a particular class or static method defined within the scope of the same class to carry out some portion of its daily routine.
I'd prefer not to use None, because I'd like to allow None to convey the message that that particular function should not be called. I guess I could use some other value, like False or NotImplemented, but it seems a) hackety b) annoying to have to write an extra couple of lines of code, as well as otherwise-redundant documentation, for something that seems like it could be expressed quite succinctly as a default parameter.
What's the best way to do this?
Maybe all you need is to use the function (and not the method) in the first place?
class SillyWalk(object):
    def is_silly_enough(walk):
        return (False, "It's never silly enough")

    def walk(self, appraisal_function=is_silly_enough):
        self.do_stuff()
        (was_good_enough, reason) = appraisal_function(self)
        if not was_good_enough:
            self.execute_self_modifying_code(reason)
        return appraisal_function

    def do_stuff(self):
        pass

    def execute_self_modifying_code(self, problem):
        deepjuju.kiss_booboo_better(self, problem)
Note that the default for appraisal_function will now be a plain function and not a method, even though is_silly_enough will be bound as a method once the class is created (at the end of the class body).
This means that
>>> SillyWalk.is_silly_enough
<unbound method SillyWalk.is_silly_enough>
but
>>> SillyWalk.walk.im_func.func_defaults[0] # the default argument to .walk
<function is_silly_enough at 0x0000000002212048>
And you can call is_silly_enough with a walk argument, or call it on a SillyWalk instance as .is_silly_enough().
If you really wanted is_silly_enough to be a static method, you could always add
is_silly_enough = staticmethod(is_silly_enough)
anywhere after the definition of walk.
I ended up writing an (un)wrapper function, to be used within function definition headers, eg
def walk(self, appraisal_method=unstaticmethod(is_silly_enough)):
This actually seems to work, at least it makes my doctests that break without it pass.
Here it is:
def unstaticmethod(static):
    """Retrieve the original function from a `staticmethod` object.

    This is intended for use in binding class method default values
    to static methods of the same class.

    For example:

    >>> class C(object):
    ...     @staticmethod
    ...     def s(*args, **kwargs):
    ...         return (args, kwargs)
    ...     def m(self, args=[], kwargs={}, f=unstaticmethod(s)):
    ...         return f(*args, **kwargs)
    >>> o = C()
    >>> o.s(1, 2, 3)
    ((1, 2, 3), {})
    >>> o.m((1, 2, 3))
    ((1, 2, 3), {})
    """
    # TODO: Technically we should be passing the actual class of the owner
    # instead of `object`, but I don't know if there's a way to get that
    # info dynamically, since the class is not actually declared when this
    # function is called during class method definition. I need to figure
    # out if passing `object` instead is going to be an issue.
    return static.__get__(None, object)
update:
I wrote doctests for the unstaticmethod function itself; they pass too. I'm still not totally sure that this is an actual smart thing to do, but it does seem to work.
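On Python 3 (and, if I recall correctly, 2.7 as well) there is a shortcut: a staticmethod object exposes the wrapped function directly as __func__, which avoids the __get__(None, object) dance entirely:

```python
def raw(x):
    return x * 2

sm = staticmethod(raw)

# __func__ hands back the original function without descriptor binding
print(sm.__func__ is raw)  # True
print(sm.__func__(21))     # 42
```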
Not sure if I get exactly what you're after, but would it be cleaner to use getattr?
>>> class SillyWalk(object):
...     @staticmethod
...     def ise(walk):
...         return (False, "boo")
...     def walk(self, am="ise"):
...         wge, r = getattr(self, am)(self)
...         print wge, r
...
>>> sw = SillyWalk()
>>> sw.walk("ise")
False boo
