Many languages support ad-hoc polymorphism (a.k.a. function overloading) out of the box. However, it seems that Python opted out of it. Still, I can imagine there might be a trick or a library that is able to pull it off in Python. Does anyone know of such a tool?
For example, in Haskell one might use this to generate test data for different types:
-- In some testing library:
class Randomizable a where
genRandom :: a
-- Overload for different types
instance Randomizable String where genRandom = ...
instance Randomizable Int where genRandom = ...
instance Randomizable Bool where genRandom = ...
-- In some client project, we might have a custom type:
instance Randomizable VeryCustomType where genRandom = ...
The beauty of this is that I can extend genRandom for my own custom types without touching the testing library.
How would you achieve something like this in Python?
Python is not a statically typed language, so it really doesn't matter if you have an instance of Randomizable or an instance of some other class that has the same methods.
One way to get the appearance of what you want could be this:
types_ = {}

def registerType(dtype, cls):
    types_[dtype] = cls

def RandomizableT(dtype):
    return types_[dtype]
Firstly, yes, I did define a function with a capital letter, but it's meant to act more like a class. For example:
registerType(int, TheLibrary.Randomizable)
registerType(str, MyLibrary.MyStringRandomizable)
Then, later:
dtype = ...  # get whatever type you want to randomize
randomizer = RandomizableT(dtype)()
print(randomizer.getRandom())
A Python function cannot be automatically specialised based on static compile-time typing. Therefore its result can only depend on its arguments received at run-time and on the global (or local) environment, unless the function itself is modifiable in-place and can carry some state.
Your generic function genRandom takes no arguments besides the typing information. Thus in Python it should at least receive the type as an argument. Since built-in classes cannot be modified, the generic function (instance) implementation for such classes should be somehow supplied through the global environment or included into the function itself.
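For illustration, here is a minimal sketch of such a registry-based emulation of the Haskell type class; the names register_randomizer and gen_random are invented here, not part of any library. Client code can register implementations for its own types without touching the "library" half:

import random
import string

_randomizers = {}

def register_randomizer(dtype):
    """Roughly plays the role of 'instance Randomizable dtype'."""
    def decorator(func):
        _randomizers[dtype] = func
        return func
    return decorator

def gen_random(dtype):
    """Roughly plays the role of genRandom, dispatching on the requested type."""
    return _randomizers[dtype]()

@register_randomizer(int)
def _random_int():
    return random.randint(0, 100)

@register_randomizer(str)
def _random_str():
    return ''.join(random.choices(string.ascii_letters, k=8))

print(gen_random(int))  # e.g. 42
print(gen_random(str))  # e.g. 'kXbQwErT'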
I've found out that since Python 3.4, there is the @functools.singledispatch decorator. However, it works only for functions which receive a type instance (object) as the first argument, so it is not clear how it could be applied in your example. I am also a bit confused by its rationale:
In addition, it is currently a common anti-pattern for Python code to inspect the types of received arguments, in order to decide what to do with the objects.
I understand that anti-pattern is a jargon term for a pattern which is considered undesirable (and does not at all mean the absence of a pattern). The rationale thus claims that inspecting types of arguments is undesirable, and this claim is used to justify introducing a tool that will simplify ... dispatching on the type of an argument. (Incidentally, note that according to PEP 20, "Explicit is better than implicit.")
The "Alternative approaches" section of PEP 443 "Single-dispatch generic functions" however seems worth reading. There are several references to possible solutions, including one to "Five-minute Multimethods in Python" article by Guido van Rossum from 2005.
Does this count as ad hoc polymorphism?
class A:
    def __init__(self):
        pass

    def aFunc(self):
        print("In A")

class B:
    def __init__(self):
        pass

    def aFunc(self):
        print("In B")

f = A()
f.aFunc()
f = B()
f.aFunc()
Output:
In A
In B
Another form of polymorphism:
from module import aName
If two modules expose the same interface, you can import either one and use it in your code.
One example of this is from xml.etree.ElementTree import XMLParser
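As a hedged sketch of that idiom (assuming lxml is installed; lxml.etree is broadly API-compatible with xml.etree.ElementTree):

try:
    from lxml.etree import XMLParser   # optional third-party implementation
except ImportError:
    from xml.etree.ElementTree import XMLParser

parser = XMLParser()   # client code is unaware of which module provided it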
I'm trying to use generic type hints in Python, but I'm not sure whether the behaviour I want is possible. I've done a lot of reading of the type hints documentation but cannot find what I'm looking for.
I want to be able to specify class type arguments with default types if the type isn't defined. For instance, I would imagine the code would look something like this.
from typing import Generic, TypeVar

T = TypeVar('T')
S = TypeVar('S')

class Attribute(Generic[T, S=int]):   # imagined syntax, not valid today
    ...

class FooBar:
    a: Attribute[str, str]
    b: Attribute[str]
So in this case type S would default to int if it is not specified in a type hint. I am also happy with a solution which uses metaprogramming to make this possible.
I think you're gonna have to wait till Python 3.12 if I'm understanding this correctly.
PEP 696 – Type defaults for TypeVarLikes | peps.python.org
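In the meantime, typing_extensions backports the proposed API. A sketch of how the example might look, assuming a typing_extensions release that supports TypeVar defaults and a type checker that understands PEP 696:

from __future__ import annotations   # keep annotations unevaluated at runtime

from typing import Generic
from typing_extensions import TypeVar   # backports the default= parameter

T = TypeVar('T')
S = TypeVar('S', default=int)

class Attribute(Generic[T, S]):
    ...

class FooBar:
    a: Attribute[str, str]
    b: Attribute[str]   # S falls back to int for a PEP 696-aware checker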
The whole idea behind generic types comes from statically typed languages (e.g. Scala or Java): they allow flexibility in types when defining code and avoid unnecessary, excessive overloading of functions (which is less of a concern in Python, a dynamically typed language). Defining default or fallback types therefore deviates somewhat from the concept of generics altogether. With that said, I see Union[] as a better fit to answer your question.
If you would like to combine that with generic types (which does not make much sense given the logic behind generics), you can do something like:
from typing import Union, TypeVar

T = TypeVar('T')

def foo(var: Union[T, str]):
    if var:
        return "boo"

res = foo("bar")
print(res)
However, I suggest that when defining your code, you try to be specific and strict about types where possible, and define better generics. That can be accomplished by defining stricter generic/abstract types:
T = TypeVar('T') # Can be anything
A = TypeVar('A', str, bytes) # Must be str or bytes
The latter will allow you to remain generic, with slightly more control over your types.
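For example, a static type checker will accept str or bytes for A but flag anything else (a small illustration, not from the original answer):

from typing import TypeVar

A = TypeVar('A', str, bytes)

def concat(x: A, y: A) -> A:
    return x + y

concat("a", "b")     # OK: A is solved as str
concat(b"a", b"b")   # OK: A is solved as bytes
# concat("a", b"b")  # rejected by a static type checker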
I'd like to provide documentation (within my program) on certain dynamically created objects, but still fall back to using their class documentation. Setting __doc__ seems a suitable way to do so. However, I can't find many details in the Python help in this regard, are there any technical problems with providing documentation on an instance? For example:
class MyClass:
    """
    A description of the class goes here.
    """

a = MyClass()
a.__doc__ = "A description of the object"

print(MyClass.__doc__)
print(a.__doc__)
__doc__ is documented as a writable attribute for functions, but not for instances of user-defined classes. pydoc.help(a), for example, will only consider the __doc__ defined on the type in Python versions < 3.9.
Other protocols (including future use-cases) may reasonably bypass the special attributes defined in the instance dict, too. See Special method lookup section of the datamodel documentation, specifically:
For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary.
So, depending on the consumer of the attribute, what you intend to do may not be reliable. Avoid.
A safe and simple alternative is just to use a different attribute name of your own choosing for your own use-case, preferably not using the __dunder__ syntax convention which usually indicates a special name reserved for some specific use by the implementation and/or the stdlib.
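For example (help_text is just an arbitrary non-dunder name chosen for illustration):

class MyClass:
    """A description of the class goes here."""

a = MyClass()
a.help_text = "A description of the object"   # ordinary attribute, reliably per-instance
print(a.help_text)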
There are some pretty obvious technical problems; the question is whether or not they matter for your use case.
Here are some major uses for docstrings that your idiom will not help with:
help(a): Type help(a) in an interactive terminal, and you get the docstring for MyClass, not the docstring for a.
Auto-generated documentation: Unless you write your own documentation generator, it's not going to understand that you've done anything special with your a value. Many doc generators do have some way to specify help for module and class constants, but I'm not aware of any that will recognize your idiom.
IDE help: Many IDEs will not only auto-complete an expression, but show the relevant docstring in a tooltip. They all do this statically, and without some special-case code designed around your idiom (which they're unlikely to have, given that it's an unusual idiom), they're almost certain to fetch the docstring for the class, not the object.
Here are some where it might help:
Source readability: As a human reading your source, I can tell the intent from the a.__doc__ = … right near the construction of a. Then again, I could tell the same intent just as easily from a Sphinx comment on the constant.
Debugging: pdb doesn't really do much with docstrings, but some GUI debuggers wrapped around it do, and most of them are probably going to show a.__doc__.
Custom dynamic use of docstrings: Obviously any code that you write that does something with a.__doc__ is going to get the instance docstring if you want it to, and therefore can do whatever it wants with it. However, keep in mind that if you want to define your own "protocol", you should use your own name, not one reserved for the implementation.
Notice that most of the same is true for using a descriptor for the docstring:
>>> class C:
...     @property
...     def __doc__(self):
...         return 'C doc'
>>> c = C()
If you type c.__doc__, you'll get 'C doc', but help(c) will treat it as an object with no docstring.
It's worth noting that making help work is one of the reasons some dynamic proxy libraries generate new classes on the fly—that is, a proxy to underlying type Spam has some new type like _SpamProxy, instead of the same GenericProxy type used for proxies to Hams and Eggseses. The former allows help(myspam) to show dynamically-generated information about Spam. But I don't know how important a reason it is; often you already need dynamic classes to, e.g., make special method lookup work, at which point adding dynamic docstrings comes for free.
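A minimal sketch of that idea, using type() to mint a one-off class whose docstring describes the wrapped type (make_proxy_class and the docstring format are invented here for illustration):

def make_proxy_class(target_cls):
    # One new class per proxied type, so help() has something
    # type-level to report.
    name = f"_{target_cls.__name__}Proxy"
    doc = f"Proxy wrapping {target_cls.__name__}: {target_cls.__doc__}"
    return type(name, (), {"__doc__": doc})

class Spam:
    """A canned meat product."""

myspam = make_proxy_class(Spam)()
help(myspam)   # shows the dynamically generated _SpamProxy docstring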
I think it's preferred to keep it under the class via your doc string as it will also aid any developer that works on the code. However if you are doing something dynamic that requires this setup then I don't see any reason why not. Just understand that it adds a level of indirection that makes things less clear to others.
Remember to K.I.S.S. where applicable :)
I just stumbled over this and noticed that, at least with Python 3.9.5, the behavior seems to have changed.
E.g. using the above example, when I call:
help(a)
I get:
Help on MyClass in module __main__:
<__main__.MyClass object>
A description of the object
Also for reference, have a look at the pydoc implementation, which shows:
def _getowndoc(obj):
    """Get the documentation string for an object if it is not
    inherited from its class."""
    try:
        doc = object.__getattribute__(obj, '__doc__')
        if doc is None:
            return None
        if obj is not type:
            typedoc = type(obj).__doc__
            if isinstance(typedoc, str) and typedoc == doc:
                return None
        return doc
    except AttributeError:
        return None
I've seen somewhere that there is a way to change some object functions in Python:
def decorable(cls):
    cls.__lshift__ = lambda objet, fonction: fonction(objet)
    return cls
I wondered if you could do things like in Ruby, such as:
number.times
Can we actually change some predefined classes by applying the function above to the class int, for example? If so, any idea how I could manage to do it? And could you link me to the part of the Python docs listing every function (like __lshift__) that can be changed?
Ordinarily, no: as a rule, Python types defined in native C code in CPython can't be monkey-patched to gain new methods. There are means to do it with direct memory access and changes to the C object structures, but that is not considered "clever" or "beautiful", much less usable. (Check https://github.com/clarete/forbiddenfruit)
That said, for class hierarchies you define in your own packages, it pretty much works: any magic "dunder" method that is set changes the behavior for all objects of that class, in the whole process.
So, you can't do that to Python's int, but you can have a

class MyInt(int):
    pass

a = MyInt(10)
MyInt.__rshift__ = lambda self, other: MyInt(str(self) + str(other))

print(a >> 20)

This will result in 1020 being printed.
The Python document that describes all the magic methods used by the language is the Data Model reference:
https://docs.python.org/3/reference/datamodel.html
One of the new features in Python 3.5 is type hinting. For example, the code below is valid now:
def greeting(name: str) -> str:
    return 'Hello ' + name
But, as I understand it, it doesn't check anything by itself and is interpreted in exactly the same way as this:
def greeting(name):
    return 'Hello ' + name
It was implemented mostly to help static analyzers (and to make code easier to understand). But is there (or is there planned to be) any way (maybe using some third-party library) to make Python throw errors when an argument of an invalid type is passed to a function with annotated argument types (using only type-hinting syntax)?
Type hints implement PEP 484, which explicitly lists this as a non-goal:
While the proposed typing module will contain some building blocks for runtime type checking -- in particular the get_type_hints() function -- third party packages would have to be developed to implement specific runtime type checking functionality, for example using decorators or metaclasses. Using type hints for performance optimizations is left as an exercise for the reader.
From this it seems to follow that the Python developers have no plan to add the functionality that you seek. The quote mentions decorators, and that does seem the way to go. In concept it is straightforward: the decorator would use get_type_hints() on the function to be decorated and would iterate through the arguments, checking their types against any hints, either throwing an error on a clash or simply passing the arguments on to the function. This would be similar to pzelasko's answer, but with the decorator using the hints to handle the boilerplate code automatically.
The simplest approach would be to vet just the arguments, though you should also be able to make a decorator that raises an error if the return type clashes with the hint. I don't yet have Python 3.5 and don't have the time to pursue it, but it seems like a good learning exercise for someone who wants to learn about both decorators and type hints. Perhaps you can be one of the "third parties" the PEP alludes to.
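A minimal sketch of such a decorator, assuming plain (non-generic) class hints; enforce_hints is an invented name, and real-world checkers handle generics, Optionals and return values far more carefully:

import inspect
from functools import wraps
from typing import get_type_hints

def enforce_hints(func):
    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # Only check plain classes; typing constructs such as
            # List[int] would need special handling.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}, "
                                f"got {type(value).__name__}")
        return func(*args, **kwargs)
    return wrapper

@enforce_hints
def greeting(name: str) -> str:
    return 'Hello ' + name

greeting("World")   # fine
greeting(42)        # raises TypeError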
If you use Python 3.6 annotations, you can use typeguard decorators:
https://typeguard.readthedocs.io/en/latest/userguide.html#using-the-decorator
NB: This should only be a "debug" or "testing" tool, not a production one.
Thus, they advise adding the -O option to Python to run without it in production.
It does not check inline variable annotations automatically, only function parameters, function return values and object types.
From the doc:
from typeguard import typechecked

@typechecked
def some_function(a: int, b: float, c: str, *args: str) -> bool:
    ...
    return retval

@typechecked
class SomeClass:
    # All type-annotated methods (including static and class methods
    # and properties) are type checked.
    # Does not apply to inner classes!
    def method(self, x: int) -> int:
        ...
You may also automate this for all functions with:

from typeguard import install_import_hook

with install_import_hook('myapp'):
    from myapp import some_module
I think the simplest way is to check the type:
def greeting(name):
    if not isinstance(name, str):
        raise TypeError('Expected str; got %s' % type(name).__name__)
    return 'Hello ' + name
Since the release of Python 3.7, it has been possible to do this using functools.singledispatch.
from functools import singledispatch

@singledispatch
def greet(arg: object):
    raise NotImplementedError(f"Don't know how to greet {type(arg)}")

@greet.register
def _(arg: str):
    print(f"Hello, {arg}!")

@greet.register
def _(arg: int):
    print(', '.join("Hello" for _ in range(arg)), "!")

greet("Bob")           # string implementation is called: prints "Hello, Bob!"
greet(4)               # int implementation is called: prints "Hello, Hello, Hello, Hello !"
greet(["Alice, Bob"])  # no list implementation, so falls back to the base implementation and raises an exception
In the above example, a base implementation is registered that will raise NotImplementedError. This is the "fallback" implementation that is used if none of the other implementations is suitable.
Following the definition of the base implementation, an arbitrary number of type-specific implementations can be registered, as shown in the example; the function behaves completely differently depending on the type of the argument it is fed, and the @singledispatch machinery uses the type annotations to register each implementation of the function with a certain type.
A singledispatch function can have any number of arguments, but only the type annotation for the first argument is relevant to which implementation is called.
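A small illustration of that point, extending the greet example above (the float implementation and its extra parameter are invented here):

@greet.register
def _(arg: float, punctuation: str = "!"):
    # Dispatch looked only at arg's type; punctuation is passed through.
    print(f"Hello, {arg:.2f}{punctuation}")

greet(3.14159)      # float implementation: prints "Hello, 3.14!"
greet(2.5, "?!")    # extra argument forwarded: prints "Hello, 2.50?!"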
One can use the library https://github.com/h2oai/typesentry for doing this at runtime.
I'm trying to mimic Ruby's methods.grep, which simply returns a list of available methods for any object (class or instance) it is called upon, filtered by a regexp pattern passed to grep.
Very handy for investigating objects in an interactive prompt.
def methods_grep(self, pattern):
    """Return a list of the object's method names matching a regexp pattern."""
    from re import search
    return [meth_name for meth_name in dir(self)
            if search(pattern, meth_name)]
Because of a Python limitation that is not quite clear to me, it unfortunately can't simply be inserted into the object class ancestor:
object.mgrep = classmethod(methods_grep)
# TypeError: can't set attributes of built-in/extension type 'object'
Is there some workaround to inject it into all classes, or do I have to stick with a global function like dir?
There is a module called forbiddenfruit that enables you to patch built-in objects. It also allows you to reverse the changes. You can find it here https://pypi.python.org/pypi/forbiddenfruit/0.1.1
from forbiddenfruit import curse
curse(object, "methods_grep", classmethod(methods_grep))
Of course, using this in production code is likely a bad idea.
There is no workaround AFAIK. I find it quite annoying that you can't alter built-in classes. Personal opinion though.
One way would be to create a base object and force all your objects to inherit from it.
But I don't see the problem to be honest. You can simply use methods_grep(object, pattern), right? You don't have to insert it anywhere.
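Indeed, as a plain function it already works on any object with no injection needed; for example (restating the function from the question so the demo is self-contained):

import re

def methods_grep(obj, pattern):
    """Return the object's attribute names that match a regexp pattern."""
    return [name for name in dir(obj) if re.search(pattern, name)]

print(methods_grep([], r'^app'))    # ['append']
print(methods_grep('', r'strip'))   # ['lstrip', 'rstrip', 'strip']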