Change Python object functions

I've seen somewhere that there is a way to change some object functions in Python:
def decorable(cls):
    cls.__lshift__ = lambda objet, fonction: fonction(objet)
    return cls
I wondered if you could do things like in Ruby, with the:
number.times
Can we actually change some predefined classes by applying the function above to the class int, for example? If so, any idea how I could manage to do it? And could you link me to the Python docs showing every function (like __lshift__) that can be changed?

Ordinarily, no. As a rule, Python types defined in native code in CPython can't be monkey-patched to have new methods. Although there are means to do that with direct memory access, changing the C object structures, that is not considered "clever" or "beautiful", much less usable (see https://github.com/clarete/forbiddenfruit).
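For illustration only, here is a minimal sketch of what forbiddenfruit's curse function makes possible, bolting a Ruby-like times method onto int (CPython only, and not something to ship in production):
from forbiddenfruit import curse

def times(self, fn):
    # call fn once for each integer from 0 up to self - 1
    for i in range(self):
        fn(i)

curse(int, "times", times)  # patch the built-in int type
(3).times(print)  # prints 0, 1, 2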
That said, for class hierarchies you define in your own packages, this pretty much works: any magic "dunder" method that is set changes the behavior for all objects of that class, across the whole process.
So you can't do that to Python's int, but you can do it with a subclass:
class MyInt(int):
    pass

a = MyInt(10)
MyInt.__rshift__ = lambda self, other: MyInt(str(self) + str(other))
print(a >> 20)
This will result in 1020 being printed.
The Python document that describes all the magic methods used by the language is the Data Model:
https://docs.python.org/3/reference/datamodel.html

Related

Python - call multiple methods in a single call

How can I call multiple methods of an object in a single statement in Python? In PHP I can do this:
class Client {
    public function ac1() { ... }
    public function ac2() { ... }
}
$client = new Client();
$client->ac1()->ac2(); // <-- I want to do it here
How would you do it in Python?
The example you gave of PHP code does not run the methods at the same time - it just combines the 2 calls in a single line of code. This pattern is called a fluent interface.
You can do the same in Python as long as a method you're calling returns the instance of the object it was called on.
I.e.:
class Client:
    def ac1(self):
        ...
        return self

    def ac2(self):
        ...
        return self

c = Client()
c.ac1().ac2()
Note that ac1() gets executed first, returns the (possibly modified in-place) instance of the object and then ac2() gets executed on that returned instance.
Some major libraries that are popular in Python are moving towards this type of interface, a good example is pandas. It has many methods that allow in-place operations, but the package is moving towards deprecating those in favour of chainable operations, returning a modified copy of the original.
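For instance, a minimal sketch of that chainable style in pandas (the column names are made up for illustration):
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# each method returns a new DataFrame, so the calls chain naturally
result = (
    df.assign(b=lambda d: d["a"] * 2)  # add a derived column
      .query("b > 2")                  # keep matching rows
      .reset_index(drop=True)          # renumber the remaining rows
)
print(result)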

Python __doc__ documentation on instances

I'd like to provide documentation (within my program) on certain dynamically created objects, but still fall back to using their class documentation. Setting __doc__ seems a suitable way to do so. However, I can't find many details in the Python help in this regard; are there any technical problems with providing documentation on an instance? For example:
class MyClass:
    """
    A description of the class goes here.
    """

a = MyClass()
a.__doc__ = "A description of the object"
print(MyClass.__doc__)
print(a.__doc__)
__doc__ is documented as a writable attribute for functions, but not for instances of user defined classes. pydoc.help(a), for example, will only consider the __doc__ defined on the type in Python versions < 3.9.
Other protocols (including future use-cases) may reasonably bypass the special attributes defined in the instance dict, too. See Special method lookup section of the datamodel documentation, specifically:
For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary.
So, depending on the consumer of the attribute, what you intend to do may not be reliable. Avoid.
A safe and simple alternative is just to use a different attribute name of your own choosing for your own use-case, preferably not using the __dunder__ syntax convention which usually indicates a special name reserved for some specific use by the implementation and/or the stdlib.
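A minimal sketch of that alternative, reusing MyClass from the question and a made-up attribute name:
a = MyClass()
a.description = "A description of this particular object"  # ordinary instance attribute
print(a.description)    # reliable: normal attribute lookup checks the instance dict
print(MyClass.__doc__)  # the class docstring stays untouched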
There are some pretty obvious technical problems; the question is whether or not they matter for your use case.
Here are some major uses for docstrings that your idiom will not help with:
help(a): Type help(a) in an interactive terminal, and you get the docstring for MyClass, not the docstring for a.
Auto-generated documentation: Unless you write your own documentation generator, it's not going to understand that you've done anything special with your a value. Many doc generators do have some way to specify help for module and class constants, but I'm not aware of any that will recognize your idiom.
IDE help: Many IDEs will not only auto-complete an expression, but show the relevant docstring in a tooltip. They all do this statically, and without some special-case code designed around your idiom (which they're unlikely to have, given that it's an unusual idiom), they're almost certain to fetch the docstring for the class, not the object.
Here are some where it might help:
Source readability: As a human reading your source, I can tell the intent from the a.__doc__ = … right near the construction of a. Then again, I could tell the same intent just as easily from a Sphinx comment on the constant.
Debugging: pdb doesn't really do much with docstrings, but some GUI debuggers wrapped around it do, and most of them are probably going to show a.__doc__.
Custom dynamic use of docstrings: Obviously any code that you write that does something with a.__doc__ is going to get the instance docstring if you want it to, and therefore can do whatever it wants with it. However, keep in mind that if you want to define your own "protocol", you should use your own name, not one reserved for the implementation.
Notice that most of the same is true for using a descriptor for the docstring:
>>> class C:
...     @property
...     def __doc__(self):
...         return 'C doc'
>>> c = C()
If you type c.__doc__, you'll get 'C doc', but help(c) will treat it as an object with no docstring.
It's worth noting that making help work is one of the reasons some dynamic proxy libraries generate new classes on the fly—that is, a proxy to underlying type Spam has some new type like _SpamProxy, instead of the same GenericProxy type used for proxies to Hams and Eggseses. The former allows help(myspam) to show dynamically-generated information about Spam. But I don't know how important a reason it is; often you already need dynamic classes to, e.g., make special method lookup work, at which point adding dynamic docstrings comes for free.
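A rough sketch of that idea, with made-up names (GenericProxy, make_proxy), just to show the shape of it:
class GenericProxy:
    def __init__(self, target):
        self._target = target
    def __getattr__(self, name):
        # anything not found on the proxy falls through to the wrapped object
        return getattr(self._target, name)

def make_proxy(target):
    # build a one-off subclass whose name and docstring mirror the target's
    # type, so help(proxy) has something useful to show
    cls = type(
        "_%sProxy" % type(target).__name__,
        (GenericProxy,),
        {"__doc__": type(target).__doc__},
    )
    return cls(target)

p = make_proxy("spam")
help(p)  # shows str's docstring under the name _strProxy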
I think it's preferable to keep the documentation on the class via its docstring, as it will also aid any developer who works on the code. However, if you are doing something dynamic that requires this setup, then I don't see any reason why not. Just understand that it adds a level of indirection that makes things less clear to others.
Remember to K.I.S.S. where applicable :)
I just stumbled over this and noticed that, at least with Python 3.9.5, the behavior seems to have changed.
E.g. using the above example, when I call:
help(a)
I get:
Help on MyClass in module __main__:
<__main__.MyClass object>
A description of the object
Also for reference, have a look at the pydoc implementation which shows:
def _getowndoc(obj):
    """Get the documentation string for an object if it is not
    inherited from its class."""
    try:
        doc = object.__getattribute__(obj, '__doc__')
        if doc is None:
            return None
        if obj is not type:
            typedoc = type(obj).__doc__
            if isinstance(typedoc, str) and typedoc == doc:
                return None
        return doc
    except AttributeError:
        return None

Is there a way to get ad-hoc polymorphism in Python?

Many languages support ad-hoc polymorphism (a.k.a. function overloading) out of the box. However, it seems that Python opted out of it. Still, I can imagine there might be a trick or a library that is able to pull it off in Python. Does anyone know of such a tool?
For example, in Haskell one might use this to generate test data for different types:
-- In some testing library:
class Randomizable a where
    genRandom :: a
-- Overload for different types
instance Randomizable String where genRandom = ...
instance Randomizable Int where genRandom = ...
instance Randomizable Bool where genRandom = ...
-- In some client project, we might have a custom type:
instance Randomizable VeryCustomType where genRandom = ...
The beauty of this is that I can extend genRandom for my own custom types without touching the testing library.
How would you achieve something like this in Python?
Python is a dynamically typed language, so it really doesn't matter whether you have an instance of Randomizable or an instance of some other class that has the same methods.
One way to get the appearance of what you want could be this:
types_ = {}

def registerType(dtype, cls):
    types_[dtype] = cls

def RandomizableT(dtype):
    return types_[dtype]
Firstly, yes, I did define a function with a capital letter, but it's meant to act more like a class. For example:
registerType(int, TheLibrary.Randomizable)
registerType(str, MyLibrary.MyStringRandomizable)
Then, later:
dtype = ...  # get whatever type you want to randomize (renamed from "type" to avoid shadowing the built-in)
randomizer = RandomizableT(dtype)()
print(randomizer.getRandom())
A Python function cannot be automatically specialised based on static compile-time typing. Therefore its result can only depend on its arguments received at run-time and on the global (or local) environment, unless the function itself is modifiable in-place and can carry some state.
Your generic function genRandom takes no arguments besides the typing information. Thus in Python it should at least receive the type as an argument. Since built-in classes cannot be modified, the generic function (instance) implementation for such classes should be somehow supplied through the global environment or included into the function itself.
I've found out that since Python 3.4, there is the @functools.singledispatch decorator. However, it works only for functions which receive a type instance (object) as the first argument, so it is not clear how it could be applied in your example. I am also a bit confused by its rationale:
In addition, it is currently a common anti-pattern for Python code to inspect the types of received arguments, in order to decide what to do with the objects.
I understand that anti-pattern is a jargon term for a pattern which is considered undesirable (and does not at all mean the absence of a pattern). The rationale thus claims that inspecting types of arguments is undesirable, and this claim is used to justify introducing a tool that will simplify ... dispatching on the type of an argument. (Incidentally, note that according to PEP 20, "Explicit is better than implicit.")
The "Alternative approaches" section of PEP 443 "Single-dispatch generic functions" however seems worth reading. There are several references to possible solutions, including one to "Five-minute Multimethods in Python" article by Guido van Rossum from 2005.
Does this count as ad hoc polymorphism?
class A:
    def __init__(self):
        pass

    def aFunc(self):
        print("In A")

class B:
    def __init__(self):
        pass

    def aFunc(self):
        print("In B")

f = A()
f.aFunc()
f = B()
f.aFunc()
Output:
In A
In B
Another version of polymorphism
from module import aName
If two modules use the same interface, you could import either one and use it in your code.
One example of this is from xml.etree.ElementTree import XMLParser
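A small sketch of that module-level polymorphism, assuming the third-party ujson package as a drop-in replacement for this subset of the standard json API:
# both modules expose the same loads/dumps interface, so the rest of the
# code doesn't care which one was imported
try:
    import ujson as json
except ImportError:
    import json

data = json.loads('{"a": 1}')
print(data["a"])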

Helper function injected on all python objects?

I'm trying to mimic methods.grep from Ruby which simply returns a list of available methods for any object (class or instance) called upon, filtered by regexp pattern passed to grep.
Very handy for investigating objects in an interactive prompt.
def methods_grep(self, pattern):
    """Return a list of the object's method names that match a regexp pattern."""
    from re import search
    return [meth_name for meth_name in dir(self)
            if search(pattern, meth_name)]
Because of a Python limitation that isn't quite clear to me, it unfortunately can't simply be inserted into the object ancestor class:
object.mgrep = classmethod(methods_grep)
# TypeError: can't set attributes of built-in/extension type 'object'
Is there some workaround to inject it into all classes, or do I have to stick with a global function like dir?
There is a module called forbiddenfruit that enables you to patch built-in objects. It also allows you to reverse the changes. You can find it here https://pypi.python.org/pypi/forbiddenfruit/0.1.1
from forbiddenfruit import curse
curse(object, "methods_grep", classmethod(methods_grep))
Of course, using this in production code is likely a bad idea.
There is no workaround AFAIK. I find it quite annoying that you can't alter built-in classes. Personal opinion though.
One way would be to create a base object and force all your objects to inherit from it.
But I don't see the problem to be honest. You can simply use methods_grep(object, pattern), right? You don't have to insert it anywhere.
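For example, called as an ordinary function it works on any object:
methods_grep([], "^app")    # ['append']
methods_grep("", "strip")   # ['lstrip', 'rstrip', 'strip']
methods_grep(dict, "^pop")  # ['pop', 'popitem']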

Python abstract module possible?

I've built a module in Python in one single file without using classes. I do this so that using some API module becomes easier. Basically like this:
the_module.py
from some_api_module import some_api_call, another_api_call

def method_one(a, b):
    return some_api_call(a + b)

def method_two(c, d, e):
    return another_api_call(c * d * e)
I now need to build many similar modules, for different API modules, but I want all of them to have the same basic set of methods so that I can import any of these modules and call a function knowing that it will behave the same in all the modules I built. To ensure they are all the same, I want to use some kind of abstract base module to build upon. I would normally grab the Abstract Base Classes module, but since I don't use classes at all, this doesn't work.
Does anybody know how I can implement an abstract base module on which I can build several other modules without using classes? All tips are welcome!
You are not using classes, but you could easily rewrite your code to do so.
A class is basically a namespace which contains functions and variables, as is a module.
It should not make a huge difference whether you call mymodule.method_one() or mymodule.myclass.method_one().
In Python there is no such thing as the interfaces you might know from Java.
The paradigm in Python is duck typing; that means, more or less, that a given module implements your API if it provides the right methods.
Python does this, for example, to determine what to do if you call myobject[i] on an instance of your class myclass. It checks whether the class has the method __getitem__ and, if it does, replaces myobject[i] with myobject.__getitem__(i).
You don't have to tell Python that your class supports this kind of access; Python just figures it out from the way you defined your class.
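For instance:
class Squares:
    def __getitem__(self, i):
        return i * i

s = Squares()
print(s[4])  # 16 -- Python translates s[4] into type(s).__getitem__(s, 4)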
The same way you should determine whether your module implements your API.
Maybe you want to look inside the hidden dictionary mymodule.__dict__ after import mymodule, which contains all the function names of your module and pointers to them. You could then check whether the right functions are present and raise an error otherwise:
import my_module_4

# check if my_module_4 implements the API
if all(func in my_module_4.__dict__ for func in ("method_one", "method_two")):
    print("API implemented")
else:
    print("Warning: Not all API functions found in my_module_4")
