I'm trying to get the names of all methods in my class.
When testing how the inspect module works, I extracted one of my methods with obj = MyClass.__dict__['mymethodname'].
But now inspect.ismethod(obj) returns False while inspect.isfunction(obj) returns True, and I don't understand why. Is there some strange way of marking methods as methods that I am not aware of? I thought it was just that it is defined in the class and takes self as its first argument.
You are seeing some effects of the behind-the-scenes machinery of Python.
When you write f = MyClass.__dict__['mymethodname'], you get the raw implementation of "mymethodname", which is a plain function. To call it, you need to pass in an additional parameter, a class instance.
When you write f = MyClass.mymethodname (note the absence of parentheses after mymethodname), you get an unbound method of class MyClass, which is an instance of MethodType that wraps the raw function you obtained above (this is Python 2 behaviour; in Python 3 there are no unbound methods, so you get the plain function itself). To call it, you need to pass in an additional parameter, a class instance.
When you write f = MyClass().mymethodname (note that I've created an object of class MyClass before taking its method), you get a bound method of an instance of class MyClass. You do not need to pass an additional class instance to it, since the instance is already stored inside the method object.
To get the wrapped method (bound or unbound) by its name given as a string, use getattr, as noted by gnibbler. For example:
unbound_mth = getattr(MyClass, "mymethodname")
or
bound_mth = getattr(an_instance_of_MyClass, "mymethodname")
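For instance, here is a minimal sketch (filling in a toy body for mymethodname) of how the different lookups compare under inspect:
import inspect

class MyClass(object):
    def mymethodname(self):
        return 42

raw = MyClass.__dict__['mymethodname']       # the plain function stored in the class
bound = getattr(MyClass(), 'mymethodname')   # a bound method of an instance

print(inspect.isfunction(raw))   # True
print(inspect.ismethod(raw))     # False
print(inspect.ismethod(bound))   # True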
Use the source
def ismethod(object):
    """Return true if the object is an instance method.

    Instance method objects provide these attributes:
        __doc__         documentation string
        __name__        name with which this method was defined
        __func__        function object containing implementation of method
        __self__        instance to which this method is bound"""
    return isinstance(object, types.MethodType)
The first argument being self is just a convention. By accessing the method by name from the class's __dict__, you are bypassing the binding, so it appears to be a function rather than a method.
If you want to access the method by name, use:
getattr(MyClass, 'mymethodname')
Well, do you mean that obj.mymethod is a method (with implicitly passed self) while Klass.__dict__['mymethod'] is a function?
Basically Klass.__dict__['mymethod'] is the "raw" function, which can be turned into a method by something called descriptors. This means that every function on a class can be both a normal function and a method, depending on how you access it. This is how the class system works in Python and it is quite normal.
If you want methods, you can't go through __dict__ (which you should never do anyway). To get all methods you should do inspect.getmembers(Klass_or_Instance, inspect.ismethod).
You can read the details here, the explanation about this is under "User-defined methods".
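For example, a quick sketch (Klass here is a stand-in with two dummy methods):
import inspect

class Klass(object):
    def spam(self): pass
    def eggs(self): pass

# Methods looked up on an instance are bound, so inspect.ismethod matches them.
print([name for name, _ in inspect.getmembers(Klass(), inspect.ismethod)])
# ['eggs', 'spam']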
From a comment made on @THC4k's answer, it looks like the OP wants to discriminate between built-in methods and methods defined in pure Python code. User-defined methods are instances of types.MethodType, but built-in methods are not.
You can get the various types like so:
import inspect
import types

is_user_defined_method = inspect.ismethod

def is_builtin_method(arg):
    return isinstance(arg, (type(str.find), type('foo'.find)))

def is_user_or_builtin_method(arg):
    MethodType = types.MethodType
    return isinstance(arg, (type(str.find), type('foo'.find), MethodType))

class MyDict(dict):
    def puddle(self): pass

for obj in (MyDict, MyDict()):
    for test_func in (is_user_defined_method, is_builtin_method,
                      is_user_or_builtin_method):
        print [attr
               for attr in dir(obj)
               if test_func(getattr(obj, attr)) and attr.startswith('p')]
which prints:
['puddle']
['pop', 'popitem']
['pop', 'popitem', 'puddle']
['puddle']
['pop', 'popitem']
['pop', 'popitem', 'puddle']
You could use dir to get the names of the available methods/attributes/etc., then iterate through them to see which ones are methods. Like this:
[ mthd for mthd in dir(FooClass) if inspect.ismethod(myFooInstance.__getattribute__(mthd)) ]
I'm expecting there to be a cleaner solution, but this could be something you could use if nobody else comes up with one. I'd like it if I didn't have to use an instance of the class to call __getattribute__.
Related
I'm learning overloading in Python 3.X and, to better understand the topic, I wrote the following code, which works in 3.X but not in 2.X. I expected the code below to fail since I've not defined __call__ for class Test. But to my surprise, it works and prints "constructor called".
class Test:
    def __init__(self):
        print("constructor called")

#Test.__getitem__()  # error, as expected
Test.__call__()  # this works in 3.x (but not in 2.x) and prints "constructor called"! Why doesn't this give an error in 3.x?
So my question is: how/why exactly does this code work in 3.x but not in 2.x? I mean, I want to know the mechanics behind what is going on.
More importantly, why is __init__ being invoked here when I am using __call__?
In 3.x:
About attribute lookup, type and object
Every time an attribute is looked up on an object, Python follows a process like this:
1. Is it directly a part of the actual data in the object? If so, use that and stop.
2. Is it directly a part of the object's class? If so, hold onto that for step 4.
3. Otherwise, check the object's class for __getattr__ and __getattribute__ overrides, look through base classes in the MRO, etc. (This is a massive simplification, of course.)
4. If something was found in step 2 or 3, check if it has a __get__. If it does, look that up (yes, that means starting over at step 1 for the attribute named __get__ on that object), call it, and use its return value. Otherwise, use what was returned directly.
Functions have a __get__ automatically; it is used to implement method binding. Classes are objects; that's why it's possible to look up attributes in them. That is: the purpose of the class Test: block is to define a data type; the code creates an object named Test which represents the data type that was defined.
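To see the binding machinery from step 4 in isolation, here is a small throwaway example (Widget is just an illustrative name):
class Widget:
    def describe(self):
        return 'a widget'

w = Widget()
raw = Widget.__dict__['describe']   # the plain function stored in the class
print(raw.__get__(w, Widget))       # a bound method, produced by the function's __get__
print(w.describe())                 # prints: a widget -- normal lookup does the same thing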
But since the Test class is an object, it must be an instance of some class. That class is called type, and has a built-in implementation.
>>> type(Test)
<class 'type'>
Notice that type(Test) is not a function call. Rather, the name type is pre-defined to refer to a class, which every other class created in user code is (by default) an instance of.
In other words, type is the default metaclass: the class of classes.
>>> type
<class 'type'>
One may ask, what class does type belong to? The answer is surprisingly simple - itself:
>>> type(type) is type
True
Since the above examples call type, we conclude that type is callable. To be callable, it must have a __call__ attribute, and it does:
>>> type.__call__
<slot wrapper '__call__' of 'type' objects>
When type is called with a single argument, it looks up the argument's class (roughly equivalent to accessing the __class__ attribute of the argument). When called with three arguments, it creates a new instance of type, i.e., a new class.
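A quick illustration of the two calling forms (Spam and SubSpam are made-up names):
class Spam:
    pass

print(type(Spam()))    # <class '__main__.Spam'> -- one argument: report the class

# Three arguments (name, bases, attributes dict): create a brand-new class.
SubSpam = type('SubSpam', (Spam,), {'answer': 42})
print(SubSpam().answer)           # 42
print(issubclass(SubSpam, Spam))  # True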
How does type work?
Because this is digging right at the core of the language (allocating memory for the object), it's not quite possible to implement this in pure Python, at least for the reference C implementation (and I have no idea what sort of magic is going on in PyPy here). But we can approximately model the type class like so:
def _validate_type(obj, required_type, context):
    if not isinstance(obj, required_type):
        good_name = required_type.__name__
        bad_name = type(obj).__name__
        raise TypeError(f'{context} must be {good_name}, not {bad_name}')

class type:
    def __new__(cls, name_or_obj, *args):
        # __new__ implicitly gets passed an instance of the class, but
        # `type` is its own class, so it will be `type` itself.
        if len(args) == 0:  # 1-argument form: check the type of an existing object.
            return name_or_obj.__class__
        # otherwise, 3-argument form: create a new class.
        try:
            bases, attrs = args
        except ValueError:
            raise TypeError('type() takes 1 or 3 arguments')
        _validate_type(name_or_obj, str, 'type.__new__() argument 1')
        _validate_type(bases, tuple, 'type.__new__() argument 2')
        _validate_type(attrs, dict, 'type.__new__() argument 3')
        # This line would not work if we were actually implementing
        # a replacement for `type`, as it would route to `object.__new__(type)`,
        # which is explicitly disallowed. But let's pretend it does...
        result = super().__new__(cls)
        # Now, fill in attributes from the parameters.
        result.__name__ = name_or_obj
        # Assigning to `__bases__` triggers a lot of other internal checks!
        result.__bases__ = bases
        for name, value in attrs.items():
            setattr(result, name, value)
        return result
    del __new__.__get__  # `__new__`s of builtins don't implement this.

    # this, however, does have a `__get__`.
    def __call__(self, *args):
        return self.__new__(self, *args)
What happens (conceptually) when we call the class (Test())?
1. Test() uses function-call syntax, but it's not a function. To figure out what should happen, we translate the call into Test.__class__.__call__(Test). (We use __class__ directly here, because translating the function call using type - asking type to categorize itself - would end up in endless recursion.)
2. Test.__class__ is type, so this becomes type.__call__(Test).
3. type contains a __call__ directly (type is its own class, remember?), so it's used directly - we don't go through the __get__ descriptor. We call the function, with Test as self, and no other arguments. (We have a function now, so we don't need to translate the function call syntax again. We could - given a function func, func.__class__.__call__.__get__(func) gives us an instance of an unnamed builtin "method wrapper" type, which does the same thing as func when called. Repeating the loop on the method wrapper creates a separate method wrapper that still does the same thing.)
4. This attempts the call Test.__new__(Test) (since self was bound to Test). Test.__new__ isn't explicitly defined in Test, but since Test is a class, we don't look in Test's class (type), but instead in Test's base (object).
5. object.__new__(Test) exists, and does magical built-in stuff to allocate memory for a new instance of the Test class, make it possible to assign attributes to that instance (even though Test is a subtype of object, which disallows that), and set its __class__ to Test.
Similarly, when we call type, the same logical chain turns type(Test) into type.__class__.__call__(type, Test) into type.__call__(type, Test), which forwards to type.__new__(type, Test). This time, there is a __new__ attribute directly in type, so this doesn't fall back to looking in object. Instead, with name_or_obj being set to Test, we simply return Test.__class__, i.e., type. And with separate name, bases, attrs arguments, type.__new__ instead creates an instance of type.
Finally: what happens when we call Test.__call__() explicitly?
If there's a __call__ defined in the class, it gets used, since it's found directly. This will fail, however, because there aren't enough arguments: the descriptor protocol isn't used since the attribute was found directly, so self isn't bound, and so that argument is missing.
If there isn't a __call__ method defined, then we look in Test's class, i.e., type. There's a __call__ there, so the rest proceeds like steps 3-5 in the previous section.
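A short demonstration of the first case (WithCall is a made-up class that does define its own __call__):
class WithCall:
    def __call__(self):
        return 'instance called'

print(WithCall()())        # prints: instance called -- calling an instance works fine
try:
    WithCall.__call__()    # found directly on the class, so self is never bound
except TypeError as exc:
    print(exc)             # ... missing 1 required positional argument: 'self'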
In Python 3.x, every class is implicitly a child of the builtin class object, and every class is an instance of the metaclass type. At least in the CPython implementation, it is type that provides the __call__ method that a lookup of Test.__call__ finds.
That means that Test.__call__() is exactly the same as Test() and will return a new Test object, calling your custom __init__ method.
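Using the Test class from the question, a minimal check of that equivalence:
a = Test()             # prints: constructor called
b = Test.__call__()    # prints: constructor called
print(type(a) is Test and type(b) is Test)   # True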
In Python 2.x, classes are old-style classes by default and are not children of object. Because of that, __call__ is not defined. You can get the same behaviour in Python 2.x by using new-style classes, that is, by inheriting explicitly from object:
# Python 2 new style class
class Test(object):
...
I am trying to unit test a block of code, and I'm running into issues with mocking the object's type to grab the right function from a dictionary.
For example:
my_func_dict = {
    Foo: foo_func,
    Bar: bar_func,
    FooBar: foobar_func,
}

def generic_type_func(my_obj):
    my_func = my_func_dict[type(my_obj)]
    my_func()
With this code, I can swap between functions with a key lookup, and it's pretty efficient.
When I try to mock my_obj like this, I get a KeyError:
mock_obj = Mock(spec=Foo)
generic_type_func(mock_obj)
# OUTPUT:
# KeyError: <class 'unittest.mock.Mock'>
This is because it's a Mock type. However, when I check isinstance(), it returns True:
is_instance_foo = isinstance(mock_obj, Foo)
print(is_instance_foo)
# Output:
# True
Is there any way to retain the type() check and the dictionary lookup by key while still maintaining the ability to mock the input and the return type? Or perhaps a different pattern that keeps the performance of a dictionary but uses isinstance() instead, so I can mock the parameter? Looping over a list and checking the type against every possible value is not preferred.
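For reference, here is a small sketch of what the mock actually looks like (Foo stands in for one of the real classes): Mock(spec=Foo) reports Foo as its __class__, which is why isinstance() passes, but its actual type is still Mock, which is why the dictionary lookup misses.
from unittest.mock import Mock

class Foo:
    pass

mock_obj = Mock(spec=Foo)
print(type(mock_obj))             # <class 'unittest.mock.Mock'>
print(mock_obj.__class__)         # <class '__main__.Foo'>
print(isinstance(mock_obj, Foo))  # True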
I managed to unit test this by moving the function to the parameter itself and calling it implicitly from the parent. I wanted to avoid this, because now the function manipulates the parent implicitly rather than the parent doing it explicitly. It looks like this now:
def generic_type_func(self, my_obj):
    my_obj.my_func(self)
The function then modifies self as needed, but implicitly rather than via an explicit function on the parent class.
This:
def my_func(self, parent):
    self.foo_prop = parent
Rather than:
def my_foo_func(self, foo):
    foo.foo_prop = self
This works fine with a mock, and I can mock that function easily. I've just hidden some of the functionality, and properties on the parent are now edited implicitly instead of explicitly from within the class I'm working in. Maybe this is preferable anyway, and it looks cleaner with less code on the parent class. Every instance must have my_func this way, which is enforced via an abstract base class (see the sketch below).
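A rough sketch of that final shape, under the assumption that the abstract base class looks something like this (all names are illustrative):
from abc import ABC, abstractmethod

class Handled(ABC):
    @abstractmethod
    def my_func(self, parent):
        """Every concrete type must say how it updates the parent."""

class Foo(Handled):
    def my_func(self, parent):
        self.foo_prop = parent   # hypothetical attribute, mirroring the example above

class Parent:
    def generic_type_func(self, my_obj):
        my_obj.my_func(self)

# In a test, Mock(spec=Foo) now works, because only the method is called:
# Parent().generic_type_func(Mock(spec=Foo))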
In Python, when we define a class, all its members, including variables and methods, also become attributes of that class. In the following example, MyClass1.a and MyClass1.mydef1 are attributes of class MyClass1.
class MyClass1:
    a = 10
    def mydef1(self):
        return 0
ins1 = MyClass1() # create instance
print(MyClass1.a) # access class attribute which is class variable
print(MyClass1.mydef1) # No idea what to do with it so just printing
print(ins1.mydef1) # No idea what to do with it so just printing
Output
10
<function MyClass1.mydef1 at 0x0000000002122EA0>
<bound method MyClass1.mydef1 of <__main__.MyClass1 object at 0x000000000212D0F0>>
Here attribute a is a variable and it can be used like any other variable.
But mydef1 is a method; if it is not invoked and is just used as MyClass1.mydef1 or ins1.mydef1, it returns the object for that method (correct me if I am wrong).
So my question is: what can we do with class/instance methods without invoking them? Are there any use cases for this, or is it just a good-to-know thing?
An attribute of a class that happens to be a function becomes a method for instances of that class:
inst.foo(params, ...)
is internally translated into:
cls.foo(inst, params, ...)
That means that what is actually invoked is the attribute from the class of the instance, and the instance itself is prepended to the argument list. It is just Python syntax to invoke methods on objects.
In your example the correct uses would be:
print(MyClass1.mydef1(ins1)) # prints 0
print(ins1.mydef1()) # also prints 0
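One practical consequence is that the un-invoked attribute is itself an object you can store, pass around and call later; a small sketch using the same names:
f = ins1.mydef1       # a bound method object: the instance is baked in
print(f())            # 0 -- no argument needed

g = MyClass1.mydef1   # the plain function from the class
print(g(ins1))        # 0 -- the instance must be passed explicitly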
Well, instance methods can be called with the appropriate parameters, of course:
print(ins1.mydef1()) # no parameters, so empty parentheses; this call prints "0" in your example instead of the method description
If you use it without the parentheses, you are playing with a reference to the function. I don't think you can have much use for it, except checking the list of methods available in a class or something like that.
In JavaScript, we can do the following to any object or function
const myFn = () => {};

Object.defineProperties(myFn, {
  property: {
    get: () => console.log('property accessed')
  }
});
This will allow for a @property-like syntax by defining a getter function for the property property.
myFn.property
// property accessed
Is there anything similar for functions in Python?
I know we can't use property since it's not a new-style class, and assigning a lambda with setattr will not work since it'll be a function.
Basically, what I want to achieve is for my_fn.property to return a new instance of another class on each access.
What I currently have with setattr is this
setattr(my_fn, 'property', OtherClass())
My hope is to design an API that looks like this: my_fn.property.some_other_function().
I would prefer using a function as my_fn and not an instance of a class, even though I realize that it might be easier to implement.
Below is the gist of what I'm trying to achieve
def my_fn():
    pass

my_fn = property('property', lambda: OtherClass())

my_fn.property
# will be a new instance of OtherClass on each call
It's not possible to do exactly what you want. The descriptor protocol that powers the property built-in is only invoked when:
The descriptor is defined on a class
The descriptor's name is accessed on an instance of said class
Problem is, the class behind functions defined in Python (aptly named function, exposed directly as types.FunctionType or indirectly by calling type() on any function defined at the Python layer) is a single shared, immutable class, so you can't add descriptors to it (and even if you could, they'd become attributes of every Python level function, not just one particular function).
The closest you can get to what you're attempting would be to define a callable class (defining __call__) that defines the descriptor you're interested in as well. Make a single instance of that class (you can throw away the class itself at this point) and it will behave as you expect. Make __call__ a staticmethod, and you'll avoid changing the signature to boot.
For example, the behavior you want could be achieved with:
class my_fn:
    # Note: Using the name "property" for a property has issues if you define
    # other properties later in the class; this is just for illustration
    @property
    def property(self):
        return OtherClass()

    @staticmethod
    def __call__(...whatever args apply; no need for self...):
        ... function behavior goes here ...

my_fn = my_fn()  # Replace class with instance of class that behaves like a function
Now you can call the "function" (really a functor, to use C++ parlance):
my_fn(...)
or access the property, getting a brand new OtherClass each time:
>>> type(my_fn.property) is type(my_fn.property)
True
>>> my_fn.property is my_fn.property
False
No, this isn't what you asked for (you seem set on having a plain function do this for you), but you're asking for a very JavaScript specific thing which doesn't exist in Python.
What you want is not currently possible, because the property would have to be set on the function type to be invoked correctly. And you are not allowed to monkeypatch the function type:
>>> type(my_fn).property = 'anything else'
TypeError: can't set attributes of built-in/extension type 'function'
The solution: use a callable class instead.
Note: What you want may become possible in Python 3.8 if PEP 575 is accepted.
Consider this example of a strategy pattern in Python (adapted from the example here). In this case the alternate strategy is a function.
class StrategyExample(object):
    def __init__(self, strategy=None):
        if strategy:
            self.execute = strategy

    def execute(*args):
        # I know that the first argument for a method
        # must be 'self'. This is just for the sake of
        # demonstration
        print locals()

# alternate strategy is a function
def alt_strategy(*args):
    print locals()
Here are the results for the default strategy.
>>> s0 = StrategyExample()
>>> print s0
<__main__.StrategyExample object at 0x100460d90>
>>> s0.execute()
{'args': (<__main__.StrategyExample object at 0x100460d90>,)}
In the above example s0.execute is a method (not a plain vanilla function) and hence the first argument in args, as expected, is self.
Here are the results for the alternate strategy.
>>> s1 = StrategyExample(alt_strategy)
>>> s1.execute()
{'args': ()}
In this case s1.execute is a plain vanilla function and as expected, does not receive self. Hence args is empty. Wait a minute! How did this happen?
Both the method and the function were called in the same fashion. How does a method automatically get self as the first argument? And when a method is replaced by a plain vanilla function how does it not get the self as the first argument?
The only difference that I was able to find was when I examined the attributes of default strategy and alternate strategy.
>>> print dir(s0.execute)
['__cmp__', '__func__', '__self__', ...]
>>> print dir(s1.execute)
# does not have __self__ attribute
Does the presence of __self__ attribute on s0.execute (the method), but lack of it on s1.execute (the function) somehow account for this difference in behavior? How does this all work internally?
You can read the full explanation here in the python reference, under "User defined methods". A shorter and easier explanation can be found in the python tutorial's description of method objects:
If you still don’t understand how methods work, a look at the implementation can perhaps clarify matters. When an instance attribute is referenced that isn’t a data attribute, its class is searched. If the name denotes a valid class attribute that is a function object, a method object is created by packing (pointers to) the instance object and the function object just found together in an abstract object: this is the method object. When the method object is called with an argument list, a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list.
Basically, what happens in your example is this:
a function assigned to a class (as happens when you declare a method inside the class body) is... a method.
When you access that method through the class, e.g. StrategyExample.execute, you get an "unbound method": it doesn't "know" which instance it "belongs" to, so if you want to use it on an instance, you need to provide the instance as the first argument yourself, e.g. StrategyExample.execute(s0)
When you access the method through the instance, e.g. self.execute or s0.execute, you get a "bound method": it "knows" which object it "belongs" to, and it will get called with the instance as the first argument.
a function that you assign to an instance attribute directly, however, as in self.execute = strategy or even s0.execute = strategy, is... just a plain function (contrary to a method, it doesn't pass via the class)
To get your example to work the same in both cases:
either you turn the function into a "real" method: you can do this with types.MethodType:
self.execute = types.MethodType(strategy, self, StrategyExample)
(you more or less tell the class that when execute is asked for this particular instance, it should turn strategy into a bound method)
or - if your strategy doesn't really need access to the instance - you go the other way around and turn the original execute method into a static method (making it a normal function again: it won't get called with the instance as the first argument, so s0.execute() will do exactly the same as StrategyExample.execute()):
@staticmethod
def execute(*args):
    print locals()
You need to assign an unbound method (i.e. with a self parameter) to the class or a bound method to the object.
Via the descriptor mechanism, you can make your own bound methods; it's also why it works when you assign the (unbound) function to a class:
my_instance = MyClass()
MyClass.my_method = my_method
When calling my_instance.my_method(), the lookup will not find an entry on my_instance, which is why it will at a later point end up doing this: MyClass.my_method.__get__(my_instance, MyClass) - this is the descriptor protocol. This will return a new method that is bound to my_instance, which you then execute using the () operator after the property.
This will share method among all instances of MyClass, no matter when they were created. However, they could have "hidden" the method before you assigned that property.
If you only want specific objects to have that method, just create a bound method manually:
my_instance.my_method = my_method.__get__(my_instance, MyClass)
For more detail about descriptors (a guide), see here.
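Putting that together, a self-contained sketch (with a toy function body filled in) of binding a free function to one specific instance:
class MyClass(object):
    pass

def my_method(self):
    return 'called on %r' % (self,)

my_instance = MyClass()

# Bind the plain function to this one instance via the descriptor protocol:
my_instance.my_method = my_method.__get__(my_instance, MyClass)
print(my_instance.my_method())   # the instance is passed in automatically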
The method is a wrapper for the function, and calls the function with the instance as the first argument. Yes, it contains a __self__ attribute (also im_self in Python prior to 3.x) that keeps track of which instance it is attached to. However, adding that attribute to a plain function won't make it a method; you need to add the wrapper. Here is how (although you may want to use MethodType from the types module to get the constructor, rather than using type(some_obj.some_method)).
The function wrapped, by the way, is accessible through the __func__ (or im_func) attribute of the method.
When you do self.execute = strategy you set the attribute to a plain function:
>>> s = StrategyExample()
>>> s.execute
<bound method StrategyExample.execute of <__main__.StrategyExample object at 0x1dbbb50>>
>>> s2 = StrategyExample(alt_strategy)
>>> s2.execute
<function alt_strategy at 0x1dc1848>
A bound method is a callable object that calls a function passing an instance as the first argument in addition to passing through all arguments it was called with.
See: Python: Bind an Unbound Method?