How to use a __format__ method from a different class? - python

I have a class which I want to have the same __format__ method as another class from a module installed on my machine. What would be the correct way to "copy" it into my class, so that it works exactly the same as if I were using the module's class?
Edit: to be specific, I want to use the uncertainties package's uncertainties.UFloat.__format__ method in a class of my own.
MCVE:
class MyClass:
    def __init__(self, v, u):
        self.v, self.u = v, u
    def __format__... ?
so that, like UFloat does:
>>> '{:L}'.format(uncertainties.ufloat(1, 0.1))
'1 \\pm 0.1'
expected behavior should be the same:
>>> '{:L}'.format(MyClass(1, 0.1))
'1 \\pm 0.1'

One way, as @juanpa.arrivillaga brought up, would be to simply point your method at the OtherClass method:
MyClass.__format__ = OtherClass.__format__
This is a pretty clumsy way of doing it, though. I would recommend using a wrapper method to accomplish the same thing, if it's a static method:
def __format__(cls, format_str):
    return OtherClass.__format__(format_str)
or even convert your current object into the other class and simply call the method, if it's an instance method:
def __format__(self, format_str):
    inst = OtherClass(self)
    return inst.__format__(format_str)
The other solution would be to find the source of OtherClass, or carefully observe the behavior, and then essentially rewrite the functionality. Normally I'd do this by looking in the source repository, but a quick PyPI search for uncertainties and its associated documentation shows no sign of a git repository to draw from, so you'd have to do it the hard way. Python's inspect module could help with finding the source code of various components of the library, if that's helpful.
Looking at uncertainties in particular, as you present in your question, it looks like the ufloat type in the library uses the format function of AffineScalarFunc, which is accessible as uncertainties.UFloat. You can do this to look at the source code for uncertainties.UFloat.__format__:
>>> import inspect
>>> import uncertainties
>>> source = inspect.getsource(uncertainties.UFloat.__format__)
>>> print(source)
and you can either try to reverse-engineer/copy the algorithm or figure out how to adapt your MyClass.__format__ to pass a value into uncertainties.UFloat.__format__ that won't crash the other class. I recommend the latter.
I'm not going to go any further with this solution because that method's code is 459 lines long and I don't feel like messing with that.
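A minimal sketch of that latter approach, assuming MyClass stores its nominal value and uncertainty as self.v and self.u (the attribute names from the question's MCVE):

import uncertainties

class MyClass:
    def __init__(self, v, u):
        self.v, self.u = v, u

    def __format__(self, format_spec):
        # Build a temporary ufloat and let the uncertainties package do the
        # actual formatting, so '{:L}' and friends behave exactly as they
        # would on a real UFloat.
        return format(uncertainties.ufloat(self.v, self.u), format_spec)

With that in place, '{:L}'.format(MyClass(1, 0.1)) should produce the same LaTeX-style output as formatting the ufloat directly.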

Related

Why does tab auto-completion in Python REPL and Jupyter notebook (or ipython) for a class evaluate all its descriptors/properties?

I am trying to implement a Python class to facilitate easy exploration of a relatively large dataset in a Jupyter notebook by exposing various (somewhat compute-intensive) filter methods as class attributes using the descriptor protocol. The idea was to take advantage of the laziness of descriptors so that a filter is only computed when its attribute is accessed.
Consider the following snippet:
import time

accessed_attr = []  # I find this easier than using basic logging for Jupyter/IPython

class MyProperty:
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        accessed_attr.append(f'accessed {self.name} from {instance} at {time.asctime()}')
        setattr(instance, self.name, self.name)
        return self.name  # just return the string

class Dummy:
    abc = MyProperty('abc')
    bce = MyProperty('bce')
    cde = MyProperty('cde')

dummy_inst = Dummy()  # instantiate the Dummy class
On dummy_inst.<tab>, I assumed Jupyter would show the auto-completions abc, bce, cde among the other hidden methods and not evaluate them. Printing the logging list accessed_attr shows that the __get__ methods of all three descriptors were called, which is not what I expect or want.
A hacky way I figured out was to defer the first access to the descriptor using a counter, as shown in the image below, but that has its own issues.
I tried other ways, such as using __slots__ and modifying __dir__ to trick the kernel, but couldn't find a way to get around the issue.
I understand there is another way using __getattribute__, but it still doesn't seem elegant. I am puzzled that something that seemed so trivial turned out to be a mystery to me. Any hints, pointers and solutions are appreciated.
Here is my Python 3.7 based environment:
{'IPython': '7.18.1',
'jedi': '0.17.2',
'jupyter': '1.0.0',
'jupyter_core': '4.6.3',
'jupyter_client': '6.1.7'}
It's unfortunately a cat-and-mouse battle. IPython used to aggressively explore attributes, which ended up being deactivated because of side effects (see, for example, why the IPCompleter.limit_to__all__ option was added), though other users then came to complain that dynamic attributes don't show up. So it's likely jedi that is looking at those attributes; you can try c.Completer.use_jedi=False to check that. If it's jedi, then you have to ask the jedi author; if not, I'm unsure, but it's a delicate balance.
Lazy vs. exploratory is a really complicated subject in IPython. You might be able to register a custom completer (even for dict keys) that makes it easier to explore without computing, or use async/await to make sure that only calling await obj.attr triggers the computation.
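As a quick way to test the jedi hypothesis, the option the answer mentions can be set in your IPython configuration file (a sketch of a standard ipython_config.py; whether this stops the descriptors above from being evaluated is exactly what you would be checking):

# ipython_config.py, e.g. created with `ipython profile create`
c = get_config()

# Disable jedi-based completion to see whether it is what triggers __get__
# on tab-completion; `%config Completer.use_jedi = False` does the same
# thing interactively in a running session.
c.Completer.use_jedi = False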

Re-referencing a large number of functions in python

I have a file functional.py which defines a number of useful functions. For each function, I want to create an alias that, when called, will give a reference to the function. Something like this:
foo/functional.py
def fun1(a):
    return a

def fun2(a):
    return a+1
...
foo/__init__.py
from inspect import getmembers, isfunction
from . import functional

for (name, fun) in getmembers(functional, isfunction):
    dun = lambda f=fun: f
    globals()[name] = dun
>>> bar.fun1()(1)
1
>>> bar.fun2()(1)
2
I can get the functions from functional.py using inspect and dynamically define a new set of functions that are fit for my purpose.
But why, you might ask? I am using the configuration manager Hydra, where one can instantiate objects by specifying the fully qualified name. I want to make use of the functions in functional.py in the config and have Hydra pass a reference to the function when creating an object that uses the function (more details can be found in the Hydra documentation).
There are many functions and I don't want to write them all out ... people have pointed out in similar questions that modifying globals() for this purpose is bad practice. My use case is fairly constrained: documentation-wise there is a one-to-one mapping (but obviously an IDE won't be able to resolve it).
Basically, I am wondering if there is a better way to do it!
Is your question related to this feature request and in particular to this comment?
FYI: In Hydra 1.1, instantiate fully supports positional arguments so I think you should be able to call functools.partial directly without redefining it.
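As a rough illustration of that suggestion in plain Python (the import path foo.functional comes from the question; everything else here is an assumption, not Hydra config syntax):

import functools

from foo import functional

# Rather than generating fun1/fun2 aliases that return a function when called,
# wrap the target directly; the result is a plain callable that can be handed
# to whatever needs a function reference.
fun2_ref = functools.partial(functional.fun2)

print(fun2_ref(1))  # -> 2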

Python __doc__ documentation on instances

I'd like to provide documentation (within my program) on certain dynamically created objects, but still fall back to using their class documentation. Setting __doc__ seems a suitable way to do so. However, I can't find many details on this in the Python help: are there any technical problems with providing documentation on an instance? For example:
class MyClass:
    """
    A description of the class goes here.
    """

a = MyClass()
a.__doc__ = "A description of the object"

print(MyClass.__doc__)
print(a.__doc__)
__doc__ is documented as a writable attribute for functions, but not for instances of user-defined classes. pydoc.help(a), for example, will only consider the __doc__ defined on the type in Python versions < 3.9.
Other protocols (including future use-cases) may reasonably bypass the special attributes defined in the instance dict, too. See Special method lookup section of the datamodel documentation, specifically:
For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary.
So, depending on the consumer of the attribute, what you intend to do may not be reliable. Avoid.
A safe and simple alternative is just to use a different attribute name of your own choosing for your own use-case, preferably not using the __dunder__ syntax convention which usually indicates a special name reserved for some specific use by the implementation and/or the stdlib.
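For example, a minimal sketch of that alternative (the attribute name description is an arbitrary choice for illustration):

class MyClass:
    """A description of the class goes here."""

a = MyClass()
a.description = "A description of the object"

# Fall back to the class docstring when no per-instance text was set:
print(getattr(a, "description", type(a).__doc__))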
There are some pretty obvious technical problems; the question is whether or not they matter for your use case.
Here are some major uses for docstrings that your idiom will not help with:
help(a): Type help(a) in an interactive terminal, and you get the docstring for MyClass, not the docstring for a.
Auto-generated documentation: Unless you write your own documentation generator, it's not going to understand that you've done anything special with your a value. Many doc generators do have some way to specify help for module and class constants, but I'm not aware of any that will recognize your idiom.
IDE help: Many IDEs will not only auto-complete an expression, but show the relevant docstring in a tooltip. They all do this statically, and without some special-case code designed around your idiom (which they're unlikely to have, given that it's an unusual idiom), they're almost certain to fetch the docstring for the class, not the object.
Here are some where it might help:
Source readability: As a human reading your source, I can tell the intent from the a.__doc__ = … right near the construction of a. Then again, I could tell the same intent just as easily from a Sphinx comment on the constant.
Debugging: pdb doesn't really do much with docstrings, but some GUI debuggers wrapped around it do, and most of them are probably going to show a.__doc__.
Custom dynamic use of docstrings: Obviously any code that you write that does something with a.__doc__ is going to get the instance docstring if you want it to, and therefore can do whatever it wants with it. However, keep in mind that if you want to define your own "protocol", you should use your own name, not one reserved for the implementation.
Notice that most of the same is true for using a descriptor for the docstring:
>>> class C:
...     @property
...     def __doc__(self):
...         return 'C doc'
...
>>> c = C()
If you type c.__doc__, you'll get 'C doc', but help(c) will treat it as an object with no docstring.
It's worth noting that making help work is one of the reasons some dynamic proxy libraries generate new classes on the fly—that is, a proxy to underlying type Spam has some new type like _SpamProxy, instead of the same GenericProxy type used for proxies to Hams and Eggseses. The former allows help(myspam) to show dynamically-generated information about Spam. But I don't know how important a reason it is; often you already need dynamic classes to, e.g., make special method lookup work, at which point adding dynamic docstrings comes for free.
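A tiny sketch of that trick, assuming you control object creation (the helper name with_doc and the generated class name are purely illustrative):

class MyClass:
    """A description of the class goes here."""

def with_doc(obj, doc):
    # Generate a one-off subclass whose __doc__ is the per-instance text,
    # then re-type the object so that help() and pydoc pick it up.
    cls = type(type(obj).__name__ + "WithDoc", (type(obj),), {"__doc__": doc})
    obj.__class__ = cls
    return obj

a = with_doc(MyClass(), "A description of the object")
help(a)  # now shows "A description of the object"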
I think it's preferable to keep it under the class via its docstring, as that will also aid any developer who works on the code. However, if you are doing something dynamic that requires this setup, then I don't see any reason why not. Just understand that it adds a level of indirection that makes things less clear to others.
Remember to K.I.S.S. where applicable :)
I just stumbled over this and noticed that, at least with Python 3.9.5, the behavior seems to have changed.
E.g. using the above example, when I call:
help(a)
I get:
Help on MyClass in module __main__:
<__main__.MyClass object>
A description of the object
Also for reference, have a look at the pydoc implementation which shows:
def _getowndoc(obj):
    """Get the documentation string for an object if it is not
    inherited from its class."""
    try:
        doc = object.__getattribute__(obj, '__doc__')
        if doc is None:
            return None
        if obj is not type:
            typedoc = type(obj).__doc__
            if isinstance(typedoc, str) and typedoc == doc:
                return None
        return doc
    except AttributeError:
        return None

Helper function injected on all Python objects?

I'm trying to mimic methods.grep from Ruby which simply returns a list of available methods for any object (class or instance) called upon, filtered by regexp pattern passed to grep.
Very handy for investigating objects in an interactive prompt.
def methods_grep(self, pattern):
    """Return a list of the object's method names matching a regexp pattern."""
    from re import search
    return [meth_name for meth_name in dir(self)
            if search(pattern, meth_name)]
Because of a Python limitation that isn't quite clear to me, it unfortunately can't simply be inserted into the object class, the ancestor of everything:
object.mgrep = classmethod(methods_grep)
# TypeError: can't set attributes of built-in/extension type 'object'
Is there some workaround to inject it into all classes, or do I have to stick with a global function like dir?
There is a module called forbiddenfruit that enables you to patch built-in objects. It also allows you to reverse the changes. You can find it here https://pypi.python.org/pypi/forbiddenfruit/0.1.1
from forbiddenfruit import curse
curse(object, "methods_grep", classmethod(methods_grep))
Of course, using this in production code is likely a bad idea.
There is no workaround AFAIK. I find it quite annoying that you can't alter built-in classes. Personal opinion though.
One way would be to create a base object and force all your objects to inherit from it.
But I don't see the problem to be honest. You can simply use methods_grep(object, pattern), right? You don't have to insert it anywhere.
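Kept as a plain function, it already works on any class or instance (an illustrative session; the second output is truncated):

>>> methods_grep([], 'app')
['append']
>>> methods_grep(str, '^is')
['isalnum', 'isalpha', 'isascii', ...]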

Problem using super (Python 2.5.2)

I'm writing a plugin system for my program and I can't get past one thing:
class ThingLoader(object):
    '''
    Loader class
    '''
    def loadPlugins(self):
        '''
        Get all the plugins from the plugins folder
        '''
        from diones.thingpad.plugin.IntrospectionHelper import loadClasses
        classList = loadClasses('./plugins', IPlugin)  # gets a list of
        # plugin classes
        self.plugins = {}  # dictionary that should be filled with
        # tuples of objects and their states: activated, deactivated
        classList[0](self)  # runs nicely
        foo = classList[1]
        print foo  # prints <class 'TestPlugin.TestPlugin'>
        foo(self)  # raises an exception
The test plugin looks like this:
import diones.thingpad.plugin.IPlugin as plugin

class TestPlugin(plugin.IPlugin):
    '''
    classdocs
    '''
    def __init__(self, loader):
        self.name = 'Test Plugin'
        super(TestPlugin, self).__init__(loader)
Now the IPlugin looks like this:
class IPlugin(object):
    '''
    classdocs
    '''
    name = ''

    def __init__(self, loader):
        self.loader = loader

    def activate(self):
        pass
All the IPlugin classes work flawlessly by themselves, but when called by ThingLoader the program gets an exception:
File "./plugins\TestPlugin.py", line 13, in __init__
super(TestPlugin, self).__init__(loader) NameError:
global name 'super' is not defined
I looked all around and I simply don't know what is going on.
‘super’ is a builtin. Unless you went out of your way to delete builtins, you shouldn't ever see “global name 'super' is not defined”.
I'm looking at your user web link where there is a dump of IntrospectionHelper. It's very hard to read without the indentation, but it looks like you may be doing exactly that:
built_in_list = ['__builtins__', '__doc__', '__file__', '__name__']
for i in built_in_list:
    if i in module.__dict__:
        del module.__dict__[i]
That's the original module dict you're changing there, not an informational copy you are about to return! Delete these members from a live module and you can expect much more than ‘super’ to break.
It's very hard to keep track of what that module is doing, but my reaction is there is far too much magic in it. The average Python program should never need to be messing around with the import system, sys.path, and monkey-patching __magic__ module members. A little bit of magic can be a neat trick, but this is extremely fragile. Just off the top of my head from browsing it, the code could be broken by things like:
name clashes with top-level modules
any use of new-style classes
modules supplied only as compiled bytecode
zipimporter
From the incredibly round-about functions like getClassDefinitions, extractModuleNames and isFromBase, it looks to me like you still have quite a bit to learn about the basics of how Python works. (Clues: getattr, module.__name__ and issubclass, respectively.)
In this case now is not the time to be diving into import magic! It's hard. Instead, do things The Normal Python Way. It may be a little more typing to say at the bottom of a package's mypackage/__init__.py:
from mypackage import fooplugin, barplugin, bazplugin
plugins = [fooplugin.FooPlugin, barplugin.BarPlugin, bazplugin.BazPlugin]
but it'll work and be understood everywhere without relying on a nest of complex, fragile magic.
Incidentally, unless you are planning on some in-depth multiple inheritance work (and again, now may not be the time for that), you probably don't even need to use super(). The usual “IPlugin.__init__(self, ...)” method of calling a known superclass is the straightforward thing to do; super() is not always “the newer, better way of doing things” and there are things you should understand about it before you go charging into using it.
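In concrete terms, that plain-call style would look something like this (class names taken from the question; a sketch, not necessarily the asker's final code):

import diones.thingpad.plugin.IPlugin as plugin

class TestPlugin(plugin.IPlugin):
    def __init__(self, loader):
        # Call the known base class explicitly instead of using super()
        plugin.IPlugin.__init__(self, loader)
        self.name = 'Test Plugin'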
Unless you're running a version of Python earlier than 2.2 (pretty unlikely), super() is definitely a built-in function (available in every scope, and without importing anything).
May be worth checking your version of Python (just start up the interactive prompt by typing python at the command line).
