Attribute added with setattr not showing up in help() - python

I've got a class to which I'm adding a help function with setattr. The function is a properly created instancemethod and works like a charm.
import new

def add_helpfunc(obj):
    def helpfunc(self):
        """Nice readable docstring"""
        # code
    setattr(obj, "helpfunc",
            new.instancemethod(helpfunc, obj, type(obj)))
However, when calling help() on the object instance, the new method is not listed as a member of the object. I thought help() (i.e. pydoc) used dir(), but dir() shows the method while help() does not.
What do I have to do to get the help information updated?

Is there a specific reason you do it the complicated way? Why not just do it like this:
def add_helpfunc(obj):
    def helpfunc(self):
        """Nice readable docstring"""
        # code
    obj.helpfunc = helpfunc
Adding the method this way also fixes your help problem, if I'm not mistaken...
Example:
>>> class A:
...     pass
...
>>> add_helpfunc(A)
>>> help(A.helpfunc)
Help on method helpfunc in module __main__:
helpfunc(self) unbound __main__.A method
Nice readable docstring
>>> help(A().helpfunc)
Help on method helpfunc in module __main__:
helpfunc(self) method of __main__.A instance
Nice readable docstring
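For completeness, here is a minimal Python 3 sketch of the same idea (the new module is Python 2 only; in Python 3 a plain function assigned to the class becomes a method automatically, and help() picks it up):

def add_helpfunc(cls):
    def helpfunc(self):
        """Nice readable docstring"""
        # code
    cls.helpfunc = helpfunc

class A:
    pass

add_helpfunc(A)
help(A)  # helpfunc is now listed, with its docstring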

Related

Why can a Python class attribute be initialized without the 'self' keyword?

class Foo():
    def __init__(self):
        pass
    def makeInstanceAttribute(oops):
        oops.x = 10

f = Foo()
f.makeInstanceAttribute()
print(f.x)
and it prints 10. How does this work? Why does oops have the same effect as self?
Quoting the Python documentation:
Often, the first argument of a method is called self. This is nothing more than a convention: the name self has absolutely no special meaning to Python. Note, however, that by not following the convention your code may be less readable to other Python programmers, and it is also conceivable that a class browser program might be written that relies upon such a convention.
self is just a convention. It can be named any way you want. What is actually passed to the function as the first argument is the object instance. Consider this code:
class A(object):
    def self_test(self):
        print self
    def foo(oops):
        print oops
>>> a = A()
>>> a.self_test()
<__main__.A object at 0x03CB18D0>
>>> a.foo()
<__main__.A object at 0x03CB18D0>
>>>
This question is probably related to yours, you might find it helpful.
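To make the equivalence concrete, here is a small Python 3 sketch: a bound-method call and the explicit call pass the same instance as the first argument, whatever that parameter happens to be named.

class A:
    def foo(oops):
        print(oops)

a = A()
a.foo()   # Python passes `a` as `oops` automatically
A.foo(a)  # the same call, spelled out explicitly; prints the same object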

When using unittest.mock.patch, why is autospec not True by default?

When you patch a function using mock, you have the option to specify autospec as True:
If you set autospec=True then the mock with be created with a spec
from the object being replaced. All attributes of the mock will also
have the spec of the corresponding attribute of the object being
replaced. Methods and functions being mocked will have their arguments
checked and will raise a TypeError if they are called with the wrong
signature.
(http://www.voidspace.org.uk/python/mock/patch.html)
I'm wondering why this isn't the default behaviour? Surely we would almost always want to catch passing incorrect parameters to any function we patch?
The only clear way to explain this is to quote the documentation on the downsides of auto-speccing and why you should be careful when using it:
This isn’t without caveats and limitations however, which is why it is
not the default behaviour. In order to know what attributes are
available on the spec object, autospec has to introspect (access
attributes) the spec. As you traverse attributes on the mock a
corresponding traversal of the original object is happening under the
hood. If any of your specced objects have properties or descriptors
that can trigger code execution then you may not be able to use
autospec. On the other hand it is much better to design your objects
so that introspection is safe [4].
A more serious problem is that it is common for instance attributes to
be created in the init method and not to exist on the class at
all. autospec can’t know about any dynamically created attributes and
restricts the api to visible attributes.
I think the key takeaway here is this line: "autospec can't know about any dynamically created attributes and restricts the api to visible attributes".
To be more explicit about where auto-speccing breaks, here is the example from the documentation:
>>> class Something:
...     def __init__(self):
...         self.a = 33
...
>>> with patch('__main__.Something', autospec=True):
...     thing = Something()
...     thing.a
...
Traceback (most recent call last):
...
AttributeError: Mock object has no attribute 'a'
As you can see, auto-speccing has no idea that there is an attribute a being created when creating your Something object.
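One workaround (a sketch, not the only option): since autospec cannot see attributes created in __init__, declare them on the mock yourself inside the patch block. A spec mock (as opposed to spec_set) still allows setting new attributes.

from unittest.mock import patch

class Something:
    def __init__(self):
        self.a = 33

with patch('__main__.Something', autospec=True) as MockSomething:
    MockSomething.return_value.a = 33  # declare the dynamic attribute explicitly
    thing = Something()                # returns the specced instance mock
    print(thing.a)                     # 33, no AttributeError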
There is nothing wrong with assigning a value to your instance attribute.
Observe the below functional example:
import unittest
from mock import patch

def some_external_thing():
    pass

def something(x):
    return x

class MyRealClass:
    def __init__(self):
        self.a = some_external_thing()
    def test_thing(self):
        return something(self.a)

class MyTest(unittest.TestCase):
    def setUp(self):
        self.my_obj = MyRealClass()

    @patch('__main__.some_external_thing')
    @patch('__main__.something')
    def test_my_things(self, mock_something, mock_some_external_thing):
        mock_some_external_thing.return_value = "there be dragons"
        self.my_obj.a = mock_some_external_thing.return_value
        self.my_obj.test_thing()
        mock_something.assert_called_once_with("there be dragons")

if __name__ == '__main__':
    unittest.main()
So, for my test case I want to make sure that some_external_thing() does not affect the behaviour of my unit test, which is why I assign the mock's return value to the instance attribute via mock_some_external_thing.return_value = "there be dragons".
Answering my own question many years later - another reason is speed.
Depending on how complex your object is, it can be that using autospec can slow your test down significantly. I've found this particularly when patching Django models.
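A rough, illustrative sketch of that cost (BigClass is a made-up stand-in with a large API surface; absolute numbers will vary, but the autospec run has to introspect every attribute):

import timeit
from unittest import mock

class BigClass:
    pass

# Give the class many attributes for autospec to introspect.
for i in range(500):
    setattr(BigClass, "method_%d" % i, lambda self: None)

def patch_plain():
    with mock.patch("__main__.BigClass"):
        pass

def patch_autospec():
    with mock.patch("__main__.BigClass", autospec=True):
        pass

print("autospec=False:", timeit.timeit(patch_plain, number=100))
print("autospec=True: ", timeit.timeit(patch_autospec, number=100))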
The action of autospeccing itself can execute code, for example via the invocation of descriptors.
>>> class A:
...     @property
...     def foo(self):
...         print("rm -rf /")
...
>>> a = A()
>>> with mock.patch("__main__.a", autospec=False) as m:
...     pass
...
>>> with mock.patch("__main__.a", autospec=True) as m:
...     pass
...
rm -rf /
Therefore, this is a problematic feature to enable by default and
is opt-in only.

How does Python tell “this is called as a function”?

An object is made callable by defining __call__. A class is supposed to be an object… or at least almost, with some exceptions. It is this exception that I'm failing to formally pin down, hence this question.
Let A be a simple class:
class A(object):
    def call(*args):
        return "In `call`"
    def __call__(*args):
        return "In `__call__`"
The first function is deliberately named call so that it can be compared with the other.
Let's instantiate it and forget about the expression it implies:
a = A() # Think of it as `a = magic` and forget about `A()`
Now, what do these print?
print(A.call())
print(a.call())
print(A())
print(a())
They result in:
>>> In `call`
>>> In `call`
>>> <__main__.A object at 0xNNNNNNNN>
>>> In `__call__`
The output (the third statement not running __call__) does not come as a surprise, but it does when I think of how often it is said that “Python classes are objects”…
These more explicit calls, however, do run __call__:
print(A.__call__())
print(a.__call__())
>>> “In `__call__`”
>>> “In `__call__`”
All of this is just to show how, in the end, A() may look strange.
There are exceptions in Python's rules, but the documentation on object.__call__ does not say much about it… no more than this:
3.3.5. Emulating callable objects
object.__call__(self[, args...])
Called when the instance is “called” as a function; […]
But how do Python tell “it's called as a function” and honour or not the object.__call__ rule?
This could be a matter of type, but even type has object as its base class.
Where can I learn more (and formally) about it?
By the way, is there any difference here between Python 2 and Python 3?
----- %< ----- edit ----- >% -----
Conclusions and other experiments after one answer and one comment
Update #1
After @Veedrac's answer and @chepner's comment, I came to this other test, which complements both:
class M(type):
    def __call__(*args):
        return "In `M.__call__`"

class A(object, metaclass=M):
    def call(*args):
        return "In `call`"
    def __call__(*args):
        return "In `A.__call__`"

print(A())
The result is:
>>> In `M.__call__`
So it seems it is the metaclass that drives the “call” operation. If I understand correctly, the metaclass matters not only for the class itself but also for instances of the class.
Update #2
Another relevant test, which shows that what matters is not an attribute of the object, but an attribute of the type of the object:
class A(object):
    def __call__(*args):
        return "In `A.__call__`"

def call2(*args):
    return "In `call2`"

a = A()
print(a())
As expected, it prints:
>>> In `A.__call__`
Now this:
a.__call__ = call2
print(a())
It prints:
>>> In `A.__call__`
The same as before the attribute was assigned. It does not print In `call2`; it's still In `A.__call__`. That's important to note, and it also explains why it was the __call__ of the metaclass that was invoked (keep in mind the metaclass is the type of the class object). The __call__ used when calling the object as a function is not looked up on the object, but on its type.
x(*args, **kwargs) is the same as type(x).__call__(x, *args, **kwargs).
So you have
>>> type(A).__call__(A)
<__main__.A object at 0x7f4d88245b50>
and it all makes sense.
chepner points out in the comments that type(A) == type. This is kind of weird, because type(A)(A) just gives type again! But remember that we're instead using type(A).__call__(A), which is not the same.
So this resolves to type.__call__(A). This is the constructor function for classes, which builds the data-structures and does all the construction magic.
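Concretely, type.__call__ is what chains __new__ and __init__ together when a class is instantiated; a small sketch:

class T:
    def __new__(cls, *args):
        print("__new__ runs first")
        return super().__new__(cls)
    def __init__(self, *args):
        print("__init__ runs second")

t = T()                # goes through type.__call__(T)
t2 = type.__call__(T)  # the same machinery, invoked explicitly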
The same is true of most dunder (double underscore) methods, such as __eq__. This is partially an optimisation in those cases.
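For instance, repeating the instance-attribute experiment from Update #2 with __eq__ shows the same type-level lookup; a minimal sketch:

class B:
    def __eq__(self, other):
        return True

b = B()
b.__eq__ = lambda other: False  # instance attribute, ignored by ==
print(b == 1)                   # True: == looks up __eq__ on type(b), not on b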

Calling a method of a class when the class name is a variable

I came up with this solution, but it looks too complicated. There must be a better and easier way. And second: is there a way to dynamically import the class?
class_name = "my_class_name"  # located in the module my_class_name.py
from my_class_name import my_class_name

my_class = globals()[class_name]
obj = my_class()
func = getattr(my_class, "my_method")
func(obj, parms)  # and finally calling the method with some parms
Take a look at the __import__ built-in function. It does exactly what you expect it to do.
Edit: as promised, here's an example. Not a very good one, I just kinda got bad news and my head is elsewhere, so you'd probably write a smarter one with more practical application for your context. At least, it illustrates the point.
>>> def getMethod(module, cls, method):
...     return getattr(getattr(__import__(module), cls), method)
...
>>> getMethod('sys', 'stdin', 'write')
<built-in method write of file object at 0x7fcd518fa0c0>
Edit 2: here's a smarter one.
>>> def getMethod(path):
...     names = path.split('.')
...     return reduce(getattr, names[1:], __import__(names[0]))
...
>>> getMethod('sys.stdin.write')
<built-in method write of file object at 0x7fdc7e0ca0c0>
Are you still around?
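For the record, a modern spelling of the same idea uses importlib.import_module (a sketch; note that reduce lives in functools on Python 3, and like __import__ above this only resolves attributes of the top-level module):

import importlib
from functools import reduce

def get_method(path):
    # Split off the top-level module, import it, then walk the remaining names.
    module_name, _, attr_path = path.partition('.')
    obj = importlib.import_module(module_name)
    return reduce(getattr, filter(None, attr_path.split('.')), obj)

print(get_method('os.path.join')('a', 'b'))  # 'a/b' (or 'a\\b' on Windows)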

How do I dynamically create a function with the same signature as another function?

I'm busy creating a metaclass that replaces a stub function on a class with a new one with a proper implementation. The original function could use any signature. My problem is that I can't figure out how to create a new function with the same signature as the old one. How would I do this?
Update
This has nothing to do with the actual question which is "How do I dynamically create a function with the same signature as another function?" but I'm adding this to show why I can't use subclasses.
I'm trying to implement something like Scala Case Classes in Python. (Not the pattern-matching aspect, just the automatically generated properties and the __eq__, __hash__, and __str__ methods.)
I want something like this:
>>> class MyCaseClass():
...     __metaclass__ = CaseMetaClass
...     def __init__(self, a, b):
...         pass
...
>>> instance = MyCaseClass(1, 'x')
>>> instance.a
1
>>> instance.b
'x'
>>> str(instance)
MyCaseClass(1, 'x')
As far as I can see, there is no way to do that with subclasses.
I believe functools.wraps does not reproduce the original call signature. However, Michele Simionato's decorator module does:
import decorator

class FooType(type):
    def __init__(cls, name, bases, clsdict):
        @decorator.decorator
        def modify_stub(func, *args, **kw):
            return func(*args, **kw) + ' + new'
        setattr(cls, 'stub', modify_stub(clsdict['stub']))

class Foo(object):
    __metaclass__ = FooType
    def stub(self, a, b, c):
        return 'original'

foo = Foo()
help(foo.stub)
# Help on method stub in module __main__:
# stub(self, a, b, c) method of __main__.Foo instance
print(foo.stub(1, 2, 3))
# original + new
use functools.wraps
>>> from functools import wraps
>>> def f(a, b):
...     return a + b
...
>>> @wraps(f)
... def f2(*args):
...     print(args)
...     return f(*args)
...
>>> f2(2, 5)
(2, 5)
7
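One caveat worth adding: on Python 3.4+, functools.wraps also sets __wrapped__, so inspect.signature (which modern help() relies on) reports the original signature even though f2 still accepts bare *args at runtime. A quick check:

import inspect
from functools import wraps

def f(a, b):
    return a + b

@wraps(f)
def f2(*args):
    return f(*args)

print(inspect.signature(f2))  # (a, b), recovered from f via __wrapped__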
It is possible to do this, using inspect.getargspec. There's even a PEP in place to make it easier.
BUT -- this is not a good thing to do. Can you imagine how much of a debugging/maintenance nightmare it would be to have your functions dynamically created at runtime -- and not only that, but done so by a metaclass?! I don't understand why you have to replace the stub dynamically; can't you just change the code when you want to change the function? I mean, suppose you have a class
class Spam(object):
    def ham(self, a, b):
        return NotImplemented
Since you don't know what it's meant to do, the metaclass can't actually implement any functionality. If you knew what ham were meant to do, you could do it in ham or one of its parent classes, instead of returning NotImplemented.
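As a present-day footnote to the asker's underlying goal: Python 3.7+ dataclasses generate the __init__, __eq__, __repr__, and (with frozen=True) __hash__ that the Scala-style case class was meant to provide, no metaclass or signature copying required; a sketch:

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen=True also makes instances hashable
class MyCaseClass:
    a: int
    b: str

instance = MyCaseClass(1, 'x')
print(instance)                # MyCaseClass(a=1, b='x')
print(instance.a, instance.b)  # 1 x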
