I have the following class structure.
class foo(object):
    def __call__(self, param1):
        pass

class bar(object):
    def __call__(self, param1, param2):
        pass
I have many classes of this type, and I am using these callable classes as follows.
classes = [foo(), bar()]

for C in classes:
    res = C(param1)
    '''here I want to put a condition: if the class takes 1 argument, just pass 1
    parameter, otherwise pass two.'''
I have thought of one pattern like this:
class abc():
    def __init__(self):
        self.param1 = 'xyz'
        self.param2 = 'pqr'

    def something(self, classes):  # classes = [foo(), bar()]
        for C in classes:
            if C.__class__.__name__ in ['bar']:
                res = C(self.param1, self.param2)
            else:
                res = C(self.param2)
But in the above solution I have to maintain a list of the classes that take two arguments, and as I add more classes to the file this will become messy.
I don't know whether this is the correct (Pythonic) way to do it.
One more idea I have in mind is to check how many arguments the class takes: if it is 2, pass the additional argument, otherwise pass 1 argument. I have looked at this solution: How can I find the number of arguments of a Python function? But I am not confident that this is the best-suited solution to my problem.
A few things about this:
There are only two types of classes in my use case: one taking 1 argument and one taking 2.
Both classes take the same first argument, so param1 is the same argument I am passing in both cases. For the class with two required parameters I am passing an additional argument (param2) containing some data.
PS: Any help or new ideas for this problem are appreciated.
UPD: Updated the code.
Basically, you want to use polymorphism on your objects' __call__() method, but you have an issue: your callables' signatures are not the same.
The plain and simple answer to this is: you can only use polymorphism on compatible types, which in this case means that your callables MUST have compatible signatures.
Hopefully, there's a quick and easy way to solve this: just modify your methods' signatures so they accept varargs and kwargs:
class Foo(object):
    def __call__(self, param1, *args, **kw):
        pass

class Bar(object):
    def __call__(self, param1, param2, *args, **kw):
        pass
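With this change, every callable in the list can be invoked with the same argument list; the extra argument is simply ignored by Foo. A minimal usage sketch (the values here are just examples):

callables = [Foo(), Bar()]
param1, param2 = 'xyz', 'pqr'
for c in callables:
    # Foo ignores param2 via *args; Bar actually uses it.
    res = c(param1, param2)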
For the case where you can't change the callable's signature, there's still a workaround: use a lambda as a proxy:
def func1(y, z):
    pass

def func2(x):
    pass

callables = [func1, lambda y, z: func2(y)]

for c in callables:
    c(42, 1138)
Note that this last example is actually an application of the adapter pattern.
Unrelated: this:
if C.__class__.__name__ in ['bar']:
is an inefficient and convoluted way to write:
if C.__class__.__name__ == 'bar':
which is itself an inefficient, convoluted AND brittle way to write:
if type(C) is bar:
which, by itself, is a possible design smell (there are legit use cases for checking the exact type of an object, but most often this is really a design issue).
I'd like to know whether this situation is correct.
We have an abstract class with one method that requires 2 parameters:
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def class_method(self, param1, param2):
        pass
Then I have to implement 3 classes. Two of them use param1 and param2 in their method, but one of them only uses param1; I don't need param2 inside it!
class ClassA(Base):
    def class_method(self, param1, param2):
        return param1 + param2

class ClassB(Base):
    def class_method(self, param1, param2):
        return param1 + param2

class ClassC(Base):
    def class_method(self, param1, param2):
        return param1
Is this a correct implementation? What's the best way to manage the unused param2 in the ClassC method?
I also tried defining param2 as optional:
class Base(ABC):
    @abstractmethod
    def class_method(self, param1, param2=None):
        pass
But the question is still the same.
Your first example is correct. Whether or not ClassC uses the required parameter is irrelevant, as long as it accepts an argument. Consider this example:
lst: list[Base] = [ClassA(), ClassB(), ClassC()]

for obj in lst:
    obj.class_method(x, y)
From a static typing perspective, you don't know what the actual runtime values in lst are, only that they are instances of (subclasses of) Base, and so obj.class_method must accept two arguments.
If you make an argument optional as in your second example, then a subclass cannot turn around and require it, for the same reasons. The following is correct given the static type of lst: no use of class_method should require a second argument.
lst: list[Base] = [ClassA(), ClassB(), ClassC()]

for obj in lst:
    obj.class_method(x)
Note this is in the context of conforming to the Liskov substitution principle. For all ABC cares about, the following is "correct":
class ClassD(Base):
    class_method = None
You can assign anything you want to class_method, as long as it's not another abstract method, and you'll be able to instantiate ClassD without a problem. Whether that instance behaves properly doesn't matter.
I frequently have simple classes which I'll only ever want a single instance of. As a simple example:
import datetime
import sys

class PS1(object):
    def __repr__(self):
        now = datetime.datetime.now()
        return str(now.strftime("%H:%M:%S"))

sys.ps1 = PS1()
Is there a way that I could somehow combine the definition and instantiation into a single step and achieve the same results?
As another example, just as something that is simple enough to understand.
class Example(object):
    def methodOne(self, a, b):
        return a + b

    def methodTwo(self, a, b):
        return a * b

example = Example()
I googled around and found nothing (lots of people throwing around the words one-off and anonymous but nobody seems to be talking about the same thing I am). I tried this, but it didn't work:
example = class(object):
    def methodOne(self, a, b):
        return a + b

    def methodTwo(self, a, b):
        return a * b
I realize I don't gain much, just one line I don't have to type plus one fewer thing in my namespace, so I understand if this doesn't exist.
I think you don't see this often because it's really hard to read, but ...
sys.ps1 = type('PS1', (object,), {'__repr__': lambda self: datetime.datetime.now().strftime('%H:%M:%S')})()
would do the trick here...
I use type to dynamically create a class (the arguments are the name, the base classes, and the class dictionary). The class dictionary consists of just a single function, __repr__, in this case.
Hopefully we can agree that the full format is much easier to grok and use ;-).
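If you do take the type() route, spreading the call over a few lines makes it somewhat easier to read; a sketch of the same one-liner reformatted:

import datetime
import sys

PS1 = type(
    'PS1',        # class name
    (object,),    # base classes
    {'__repr__': lambda self: datetime.datetime.now().strftime('%H:%M:%S')},
)
sys.ps1 = PS1()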
You could use a simple class decorator to replace the class with an instance of it:
def instantiator(cls):
    return cls()
Then use it like this:
@instantiator
class PS1(object):
    def __repr__(self):
        now = datetime.datetime.now()
        return str(now.strftime("%H:%M:%S"))
Then:
>>> PS1
11:53:37
If you do this, you might want to make the class name lowercase, since it will ultimately be used to name an instance, not a class.
This still requires an extra line, but not an extra name in the namespace.
If you really wanted to, you could write a metaclass that does the same thing, but automatically. However, I don't really think this would save much effort over just instantiating the class manually, and it would definitely make the code more complex and difficult to understand.
You could use a metaclass, so you can still use prettier syntax in comparison to @mgilson's answer.
class OneOff(type):
    def __new__(cls, name, bases, attrs):
        klass = type.__new__(cls, name, bases, attrs)
        return klass()

class PS1(object):
    __metaclass__ = OneOff  # Python 2 syntax; in Python 3 this is: class PS1(metaclass=OneOff)
    ...
However, I'm with the others saying that I'm not sure this is a great idea. I did something like this once, but it was for a very specific use case, and I'd really think about exploring other avenues first. Also, this looks an awful lot like a singleton/borg, so maybe that would be the better way for you to go.
(@mgilson's answer achieves what you're looking for in the most direct way. I second his opinion that your original code is better than any of the answers here.)
A simpler, more readable alternative, if you don't need any special methods (e.g. __repr__): just use a dict of functions (playing the role of the methods):
fake_obj = dict(method_one=lambda a, b: a + b, method_two=lambda a, b: a * b)
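Calling into it is then just a dictionary lookup, e.g.:

fake_obj['method_one'](2, 3)  # 5
fake_obj['method_two'](2, 3)  # 6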
There are two ways to do this in Python. One is to instantiate a singleton object, which can be done with a decorator; the other is to make the class itself the used object, with class methods and class variables.
The first option (singleton) looks like this:
def apply_class(*args, **kwargs):
    def myclass(c):
        return c(*args, **kwargs)
    return myclass

@apply_class(5)
class mysingleton(object):
    def __init__(self, x):
        print x
The second option (class methods/variables) looks like this:
class mysingleton:
    myvariable = 5

    @classmethod
    def mymethod(cls):
        print cls.myvariable
Ok, here is the real world scenario: I'm writing an application, and I have a class that represents a certain type of files (in my case this is photographs but that detail is irrelevant to the problem). Each instance of the Photograph class should be unique to the photo's filename.
The problem is, when a user tells my application to load a file, I need to be able to identify when files are already loaded, and use the existing instance for that filename, rather than create duplicate instances on the same filename.
To me this seems like a good situation to use memoization, and there are a lot of examples of that out there, but in this case I'm not just memoizing an ordinary function, I need to be memoizing __init__(). This poses a problem, because by the time __init__() gets called it's too late, as a new instance has already been created.
In my research I found Python's __new__() method, and I was actually able to write a working trivial example, but it fell apart when I tried to use it on my real-world objects, and I'm not sure why (the only thing I can think of is that my real world objects were subclasses of other objects that I can't really control, and so there were some incompatibilities with this approach). This is what I had:
class Flub(object):
    instances = {}

    def __new__(cls, flubid):
        try:
            self = Flub.instances[flubid]
        except KeyError:
            self = Flub.instances[flubid] = super(Flub, cls).__new__(cls)
            print 'making a new one!'
            self.flubid = flubid
        print id(self)
        return self

    @staticmethod
    def destroy_all():
        for flub in Flub.instances.values():
            print 'killing', flub

a = Flub('foo')
b = Flub('foo')
c = Flub('bar')

print a
print b
print c

print a is b, b is c

Flub.destroy_all()
Which outputs this:
making a new one!
139958663753808
139958663753808
making a new one!
139958663753872
<__main__.Flub object at 0x7f4aaa6fb050>
<__main__.Flub object at 0x7f4aaa6fb050>
<__main__.Flub object at 0x7f4aaa6fb090>
True False
killing <__main__.Flub object at 0x7f4aaa6fb050>
killing <__main__.Flub object at 0x7f4aaa6fb090>
It's perfect! Only two instances were made for the two unique id's given, and Flub.instances clearly only has two listed.
But when I tried to take this approach with the objects I was using, I got all kinds of nonsensical errors about how __init__() took only 0 arguments, not 2. So I'd change some things around and then it would tell me that __init__() needed an argument. Totally bizarre.
After a while of fighting with it, I basically just gave up and moved all the __new__() black magic into a staticmethod called get, such that I could call Photograph.get(filename) and it would only call Photograph(filename) if filename wasn't already in Photograph.instances.
Does anybody know where I went wrong here? Is there some better way to do this?
Another way of thinking about it is that it's similar to a singleton, except it's not globally singleton, just singleton-per-filename.
Here's my real-world code using the staticmethod get if you want to see it all together.
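A minimal sketch of that staticmethod get pattern (with the class and its attributes simplified to hypothetical stand-ins) looks roughly like this:

class Photograph(object):
    instances = {}

    def __init__(self, filename):
        self.filename = filename

    @staticmethod
    def get(filename):
        # Reuse the existing instance for this filename, if any.
        if filename not in Photograph.instances:
            Photograph.instances[filename] = Photograph(filename)
        return Photograph.instances[filename]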
Let us see two points about your question.
Using memoize
You can use memoization, but you should decorate the class, not the __init__ method. Suppose we have this memoizer:
def get_id_tuple(f, args, kwargs, mark=object()):
    """
    Some quick'n'dirty way to generate a unique key for a specific call.
    """
    l = [id(f)]
    for arg in args:
        l.append(id(arg))
    l.append(id(mark))
    for k, v in kwargs.items():
        l.append(k)
        l.append(id(v))
    return tuple(l)
_memoized = {}

def memoize(f):
    """
    Some basic memoizer.
    """
    def memoized(*args, **kwargs):
        key = get_id_tuple(f, args, kwargs)
        if key not in _memoized:
            _memoized[key] = f(*args, **kwargs)
        return _memoized[key]
    return memoized
Now you just need to decorate the class:
@memoize
class Test(object):
    def __init__(self, somevalue):
        self.somevalue = somevalue
Let us run a test:
tests = [Test(1), Test(2), Test(3), Test(2), Test(4)]

for test in tests:
    print test.somevalue, id(test)
The output is below. Note that the same parameters yield the same id of the returned object:
1 3072319660
2 3072319692
3 3072319724
2 3072319692
4 3072319756
Anyway, I would prefer to create a function to generate the objects and memoize it. It seems cleaner to me, but that may just be an irrelevant pet peeve of mine:
class Test(object):
    def __init__(self, somevalue):
        self.somevalue = somevalue

@memoize
def get_test_from_value(somevalue):
    return Test(somevalue)
Using __new__:
Or, of course, you can override __new__. Some days ago I posted an answer about the ins, outs and best practices of overriding __new__ that can be helpful. Basically, it says to always pass *args, **kwargs to your __new__ method.
I, for one, would prefer to memoize a function which creates the objects, or even to write a specific function which takes care of never recreating an object for the same parameters. Of course, this is mostly an opinion of mine, not a rule.
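For completeness, a minimal __new__-based sketch along those lines (the class name, the _instances dict and the single key argument are all illustrative, not from the question):

class Cached(object):
    _instances = {}

    def __new__(cls, key, *args, **kwargs):
        if key not in cls._instances:
            cls._instances[key] = super(Cached, cls).__new__(cls)
        return cls._instances[key]

    def __init__(self, key):
        # Note: __init__ still runs on every call, even when a cached
        # instance is returned, so keep it idempotent.
        self.key = key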
The solution that I ended up using is this:
class memoize(object):

    def __init__(self, cls):
        self.cls = cls
        self.__dict__.update(cls.__dict__)

        # This bit allows staticmethods to work as you would expect.
        for attr, val in cls.__dict__.items():
            if type(val) is staticmethod:
                self.__dict__[attr] = val.__func__

    def __call__(self, *args):
        key = '//'.join(map(str, args))
        if key not in self.cls.instances:
            self.cls.instances[key] = self.cls(*args)
        return self.cls.instances[key]
And then you decorate the class with this, not __init__. Although brandizzi provided me with that key piece of information, his example decorator didn't function as desired.
I found this concept quite subtle, but basically when you're using decorators in Python, you need to understand that the thing that gets decorated (whether it's a method or a class) is actually replaced by the decorator itself. So for example when I'd try to access Photograph.instances or Camera.generate_id() (a staticmethod), I couldn't actually access them because Photograph doesn't actually refer to the original Photograph class, it refers to the memoized function (from brandizzi's example).
To get around this, I had to create a decorator class that actually took all the attributes and static methods from the decorated class and exposed them as its own. Almost like a subclass, except that the decorator class doesn't know ahead of time what classes it will be decorating, so it has to copy the attributes over after the fact.
The end result is that any instance of the memoize class becomes an almost transparent wrapper around the actual class that it has decorated, with the exception that attempting to instantiate it (but really calling it) will provide you with cached copies when they're available.
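Usage then looks roughly like this; note that the decorated class has to provide its own instances dict for the decorator above to store into (the Photograph body here is just a stand-in):

@memoize
class Photograph(object):
    instances = {}

    def __init__(self, filename):
        self.filename = filename

a = Photograph('/tmp/a.jpg')
b = Photograph('/tmp/a.jpg')
print(a is b)  # True: the second call returned the cached instance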
The parameters to __new__ also get passed to __init__, so:
def __init__(self, flubid):
    ...
You need to accept the flubid argument there, even if you don't use it in __init__.
Here is the relevant comment taken from typeobject.c in Python 2.7.3:
/* You may wonder why object.__new__() only complains about arguments
when object.__init__() is not overridden, and vice versa.
Consider the use cases:
1. When neither is overridden, we want to hear complaints about
excess (i.e., any) arguments, since their presence could
indicate there's a bug.
2. When defining an Immutable type, we are likely to override only
__new__(), since __init__() is called too late to initialize an
Immutable object. Since __new__() defines the signature for the
type, it would be a pain to have to override __init__() just to
stop it from complaining about excess arguments.
3. When defining a Mutable type, we are likely to override only
__init__(). So here the converse reasoning applies: we don't
want to have to override __new__() just to stop it from
complaining.
4. When __init__() is overridden, and the subclass __init__() calls
object.__init__(), the latter should complain about excess
arguments; ditto for __new__().
Use cases 2 and 3 make it unattractive to unconditionally check for
excess arguments. The best solution that addresses all four use
cases is as follows: __init__() complains about excess arguments
unless __new__() is overridden and __init__() is not overridden
(IOW, if __init__() is overridden or __new__() is not overridden);
symmetrically, __new__() complains about excess arguments unless
__init__() is overridden and __new__() is not overridden
(IOW, if __new__() is overridden or __init__() is not overridden).
However, for backwards compatibility, this breaks too much code.
Therefore, in 2.6, we'll *warn* about excess arguments when both
methods are overridden; for all other cases we'll use the above
rules.
*/
I was trying to figure this out as well, and I put together a solution that combines some tips from other StackOverflow questions (links in the code comments).
If anyone still needs, try this out:
import functools

def memoize(f):
    class Memoized:
        def __init__(self, func):
            self._f = func
            self._cache = {}
            # Make the Memoized class masquerade as the object we are memoizing.
            # Preserve class attributes
            functools.update_wrapper(self, func)
            # Preserve static methods
            # From https://stackoverflow.com/questions/11174362
            for k, v in func.__dict__.items():
                self.__dict__[k] = v.__func__ if type(v) is staticmethod else v

        def __call__(self, *args, **kwargs):
            # Generate a hashable key from the positional and keyword arguments.
            key = args
            if kwargs:
                key += ('__kwargs__',)  # separator between positional and keyword parts
                for k, v in kwargs.items():
                    key += (hash(k), hash(v))
            key = hash(key)
            if key in self._cache:
                return self._cache[key]
            else:
                self._cache[key] = self._f(*args, **kwargs)
                return self._cache[key]

        def __get__(self, instance, owner):
            """
            From https://stackoverflow.com/questions/30104047/how-can-i-decorate-an-instance-method-with-a-decorator-class
            """
            return functools.partial(self.__call__, instance)

        def __instancecheck__(self, other):
            """Make isinstance() work"""
            return isinstance(other, self._f)

    return Memoized(f)
Then you can use it like so:
@memoize
class Test:
    def __init__(self, value):
        self._value = value

    @property
    def value(self):
        return self._value
Uploaded the full thing with documentation to: https://github.com/spoorn/nemoize
I'm trying to modify Guido's multimethod (dynamic dispatch code):
http://www.artima.com/weblogs/viewpost.jsp?thread=101605
to handle inheritance and possibly out of order arguments.
e.g. (inheritance problem)
class A(object):
    pass

class B(A):
    pass

@multimethod(A, A)
def foo(arg1, arg2):
    print 'works'

foo(A(), A())  # works
foo(A(), B())  # fails
Is there a better way than iteratively checking for the super() of each item until one is found?
e.g. (argument ordering problem)
I was thinking of this from a collision detection standpoint.
e.g.
foo(Car(), Truck()) and
foo(Truck(), Car())
should both trigger
foo(Car, Truck)  # Note: @multimethod(Truck, Car) will throw an exception if @multimethod(Car, Truck) was registered first?
I'm looking specifically for an 'elegant' solution. I know that I could just brute force my way through all the possibilities, but I'm trying to avoid that. I just wanted to get some input/ideas before sitting down and pounding out a solution.
Thanks
Regarding the inheritance issue: this can be done with a slight change to MultiMethod (iterating through self.typemap and checking with issubclass):
registry = {}

class MultiMethod(object):
    def __init__(self, name):
        self.name = name
        self.typemap = {}

    def __call__(self, *args):
        types = tuple(arg.__class__ for arg in args)  # a generator expression!
        for typemap_types in self.typemap:
            if all(issubclass(arg_type, known_type)
                   for arg_type, known_type in zip(types, typemap_types)):
                function = self.typemap.get(typemap_types)
                return function(*args)
        raise TypeError("no match")

    def register(self, types, function):
        if types in self.typemap:
            raise TypeError("duplicate registration")
        self.typemap[types] = function

def multimethod(*types):
    def register(function):
        name = function.__name__
        mm = registry.get(name)
        if mm is None:
            mm = registry[name] = MultiMethod(name)
        mm.register(types, function)
        return mm
    return register
class A(object):
    pass

class B(A):
    pass

class C(object):
    pass

@multimethod(A, A)
def foo(arg1, arg2):
    print 'works'

foo(A(), A())  # works
foo(A(), B())  # works
foo(C(), B())  # raises TypeError
Note that self.typemap is a dict, and dicts are unordered. So if you use #multimethod to register two functions, one whose types are subclasses of the other, then the behavior of foo may be undefined. That is, the result would depend on which typemap_types comes up first in the loop for typemap_types in self.typemap.
super() returns a proxy object, not the parent class (because you can have multiple inheritance), so that wouldn't work. Using isinstance() is your best bet, although there's no way to make it as elegant as the dictionary lookups using type(arg).
I don't think allowing alternative argument orderings is a good idea; it's liable to lead to nasty surprises, and making it compatible with inheritance as well would be a significant headache. However, it would be quite simple to make a second decorator for "use this function if all the arguments are of type A", or "use this function if all the arguments are in types {A, B, E}".
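A rough standalone sketch of that second idea (not wired into the MultiMethod registry above; the decorator name is illustrative):

def all_of_type(cls):
    """Dispatch to the decorated function only if every argument is an instance of cls."""
    def decorator(function):
        def wrapper(*args):
            if not all(isinstance(arg, cls) for arg in args):
                raise TypeError("no match")
            return function(*args)
        return wrapper
    return decorator

@all_of_type(A)
def bar(arg1, arg2):
    print 'works'

bar(A(), B())  # works, since B is a subclass of A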
Many times I have member functions that copy parameters into the object's fields. For example:
class NouveauRiches(object):
    def __init__(self, car, mansion, jet, bling):
        self.car = car
        self.mansion = mansion
        self.jet = jet
        self.bling = bling
Is there a python language construct that would make the above code less tedious?
One could use *args:
def __init__(self, *args):
    self.car, self.mansion, self.jet, self.bling = args
+: less tedious
-: function signature not revealing enough; you need to dive into the function code to know how to use the function
-: does not raise a TypeError on call with wrong # of parameters (but does raise a ValueError)
Any other ideas? (Whatever your suggestion, make sure the code calling the function stays simple.)
You could do this with a helper method, something like this:
import inspect

def setargs(func):
    f = inspect.currentframe(1)  # Python 2: inspect.currentframe is an alias for sys._getframe
    argspec = inspect.getargspec(func)
    for arg in argspec.args:
        setattr(f.f_locals["self"], arg, f.f_locals[arg])
Usage:
class Foo(object):
    def __init__(self, bar, baz=4711):
        setargs(self.__init__)
        print self.bar  # Now defined
        print self.baz  # Now defined
This is not pretty, and it should probably only be used when prototyping. Please use explicit assignment if you plan to have others read it.
It could probably be improved not to need to take the function as an argument, but that would require even more ugly hacks and trickery :)
I would go for this; it also lets you override already defined attributes.
class D:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
But I personally would just go the long way.
Think of those:
- Explicit is better than implicit.
- Flat is better than nested.
(The Zen of Python)
Try something like
d = dict(locals())
del d['self']
self.__dict__.update(d)
Of course, it returns all local variables, not just function arguments.
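In context, that snippet would sit at the top of __init__; a sketch reusing the NouveauRiches example from the question:

class NouveauRiches(object):
    def __init__(self, car, mansion, jet, bling):
        d = dict(locals())
        del d['self']
        self.__dict__.update(d)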
I am not sure this is such a good idea, but it can be done:
import inspect

class NouveauRiches(object):
    def __init__(self, car, mansion, jet, bling):
        frame = inspect.currentframe()
        arguments = inspect.getargvalues(frame)[0]  # argument names, including 'self'
        values = inspect.getargvalues(frame)[3]     # the frame's locals
        for name in arguments:
            if name != 'self':
                self.__dict__[name] = values[name]
It does not read great either, though I suppose you could put this in a utility method that is reused.
You could try something like this:
class C(object):
    def __init__(self, **kwargs):
        for k in kwargs:
            d = {k: kwargs[k]}
            self.__dict__.update(d)
Or using setattr you can do:
class D(object):
    def __init__(self, **kwargs):
        for k in kwargs:
            setattr(self, k, kwargs[k])
Both can then be called like:
myclass = C(test=1, test2=2)
So you have to use **kwargs, rather than *args.
I sometimes do this for classes that act "bunch-like", that is, they have a bunch of customizable attributes:
class SuperClass(object):
    def __init__(self, **kw):
        for name, value in kw.iteritems():
            if not hasattr(self, name):
                raise TypeError('Unexpected argument: %s' % name)
            setattr(self, name, value)

class SubClass(SuperClass):
    instance_var = None  # default value

class SubClass2(SubClass):
    other_instance_var = True

    @property
    def something_dynamic(self):
        return self._internal_var

    @something_dynamic.setter  # new Python 2.6 feature of properties
    def something_dynamic(self, value):
        assert value is None or isinstance(value, str)
        self._internal_var = value
Then you can call SubClass2(instance_var=[], other_instance_var=False) and it'll work without defining __init__ in either of them. You can use any property as well. Though this allows you to overwrite methods, which you probably wouldn't intend (as they return True for hasattr() just like an instance variable).
If you add any property or other descriptor it will work fine. You can use that to do type checking; unlike type checking in __init__, it'll be applied any time that value is updated. Note you can't use any positional arguments for these unless you override __init__, so sometimes what would be a natural positional argument won't work. formencode.declarative covers this and other issues, probably with a thoroughness I would not suggest you attempt (in retrospect I don't think it's worth it).
Note that any recipe that uses self.__dict__ won't respect properties and descriptors, and if you use those together you'll just get weird and unexpected results. I only recommend using setattr() to set attributes, never self.__dict__.
Also this recipe doesn't give a very helpful signature, while some of the ones that do frame and function introspection do. With some work it is possible to dynamically generate a __doc__ that clarifies the arguments... but again I'm not sure the payoff is worth the addition of more moving parts.
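A quick illustration of the difference, with a hypothetical Widget class:

class Widget(object):
    def __init__(self, **kw):
        for name, value in kw.iteritems():
            setattr(self, name, value)  # goes through properties/descriptors

    @property
    def size(self):
        return self._size

    @size.setter
    def size(self, value):
        self._size = int(value)  # conversion/validation runs

w = Widget(size='3')
print w.size  # 3 -- the setter ran

# By contrast, self.__dict__.update(kw) would never run the setter: the raw
# string would sit in the instance dict, _size would never be set, and reading
# w.size (which still goes through the property) would raise AttributeError.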
I am a fan of the following
import inspect

def args_to_attrs(otherself):
    frame = inspect.currentframe(1)
    argvalues = inspect.getargvalues(frame)
    for arg in argvalues.args:
        if arg == 'self':
            continue
        value = argvalues.locals[arg]
        setattr(otherself, arg, value)

class MyClass:
    def __init__(self, arga="baf", argb="lek", argc=None):
        args_to_attrs(self)
Arguments to __init__ are explicitly named, so it is clear what attributes are being set. Additionally, it is a little bit streamlined over the currently accepted answer.