Inherit from both 'heapq' and 'deque' in Python?

I'm trying to implement a 'heapq' or a 'deque' dynamically (according to the user's input):
class MyClass():
    def __init__(self, choose = True ):
        self.Q = []
        self.add = self.genAdd(choose)
        self.get = self.genGet(choose)

    def genAdd(self, ch):
        if(ch == True):
            def f(Q, elem):
                return Q.append
        else:
            def f(Q):
                return heappush
        return f
and same for 'genGet'
the execution is correct on one side or the other (but not both at the same time). I get things like
TypeError: f() takes exactly 1 argument (2 given)
I tried multiple inheritance but got
TypeError: Error when calling the metaclass bases
metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
the problem is that heapq is called with
heappush(Q, elem)
and the deque with
Q.append(elem)
I hope the point is clear. I think there should be a way to fix that (maybe using lambda)
Thanks

Inheritance isn't going to help here.
First, heapq isn't even a class, so you can't inherit from it. You can write a class that wraps up its functionality (or find one on the ActiveState recipes or in a PyPI package), but you have to have a class to inherit from.
But, more importantly, the whole point of inheritance is to give you an "is-a" relationship. This thing you're building isn't-a deque, or a heapq-wrapping object, it's a thing with an interface that you've defined (add and get) that happens to use either a deque or a list with heapq for implementation.
So, just do that explicitly. You're trying to define a function that either calls append on a deque, or calls heapq.heappush on a list. You're not trying to write a curried function that returns a function that does the thing, just a function that does the thing.
def genAdd(self, ch):
    # As a side note, you don't need to compare == True, nor
    # do you need to wrap if conditions in parens.
    if ch:
        def f(elem):
            self.Q.append(elem)
    else:
        def f(elem):
            heappush(self.Q, elem)
    return f
There are a few other problems here. First, you definitely need to set self.Q = deque() instead of self.Q = [] if you want a deque. And you probably want to wrap these functions up as a types.MethodType instead of using self as a closure variable (this will work, it's just less readable, because it may not be clear to many people why it works; a sketch of that variant follows the example below). And so on. But this is the fundamental problem.
For example:
from collections import deque
from heapq import heappush

class MyClass(object):
    def __init__(self, choose=True):
        self.Q = deque() if choose else []
        self.add = self.genAdd(choose)

    def genAdd(self, ch):
        if ch:
            def f(elem):
                self.Q.append(elem)
        else:
            def f(elem):
                heappush(self.Q, elem)
        return f

d = MyClass(True)
d.add(3)
d.add(2)
print(d.Q)

h = MyClass(False)
h.add(3)
h.add(2)
print(h.Q)
This will print:
deque([3, 2])
[2, 3]
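Here is a minimal sketch of the types.MethodType variant mentioned above (same behavior; the chosen function is just bound to the instance as a real method rather than captured via a closure):

from collections import deque
from heapq import heappush
from types import MethodType

class MyClass(object):
    def __init__(self, choose=True):
        self.Q = deque() if choose else []
        if choose:
            def add(self, elem):
                self.Q.append(elem)
        else:
            def add(self, elem):
                heappush(self.Q, elem)
        # Bind the chosen function to this instance as a bound method.
        self.add = MethodType(add, self)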
That being said, there's probably a much better design: Create a class that wraps a deque in your interface. Create another class that wraps a list with heapq in your interface. Create a factory function that returns one or the other:
class _MyClassDeque(object):
    def __init__(self):
        self.Q = deque()
    def add(self, elem):
        self.Q.append(elem)

class _MyClassHeap(object):
    def __init__(self):
        self.Q = []
    def add(self, elem):
        heappush(self.Q, elem)

def MyClass(choose=True):
    return _MyClassDeque() if choose else _MyClassHeap()
Now you get the same results, but the code is a lot easier to understand (and slightly more efficient, if you care…).


Function overload errors [duplicate]

I know that Python does not support method overloading, but I've run into a problem that I can't seem to solve in a nice Pythonic way.
I am making a game where a character needs to shoot a variety of bullets, but how do I write different functions for creating these bullets? For example suppose I have a function that creates a bullet travelling from point A to B with a given speed. I would write a function like this:
def add_bullet(sprite, start, headto, speed):
    # Code ...
But I want to write other functions for creating bullets like:
def add_bullet(sprite, start, direction, speed):
def add_bullet(sprite, start, headto, speed, acceleration):
def add_bullet(sprite, script):       # For bullets that are controlled by a script
def add_bullet(sprite, curve, speed): # for bullets with curved paths
# And so on ...
And so on with many variations. Is there a better way to do it without using so many keyword arguments, because it's getting ugly fast? Renaming each function is pretty bad too, because you end up with either add_bullet1, add_bullet2, or add_bullet_with_really_long_name.
To address some answers:
No, I can't create a Bullet class hierarchy because that's too slow. The actual code for managing bullets is in C and my functions are wrappers around a C API.
I know about keyword arguments, but checking for all sorts of combinations of parameters is getting annoying; default arguments help a lot, like acceleration=0.
What you are asking for is called multiple dispatch. See the Julia language examples, which demonstrate different types of dispatch.
However, before looking at that, we'll first tackle why overloading is not really what you want in Python.
Why Not Overloading?
First, one needs to understand the concept of overloading and why it's not applicable to Python.
When working with languages that can discriminate data types at
compile-time, selecting among the alternatives can occur at
compile-time. The act of creating such alternative functions for
compile-time selection is usually referred to as overloading a
function. (Wikipedia)
Python is a dynamically typed language, so the concept of overloading simply does not apply to it. However, all is not lost, since we can create such alternative functions at run-time:
In programming languages that defer data type identification until
run-time the selection among alternative
functions must occur at run-time, based on the dynamically determined
types of function arguments. Functions whose alternative
implementations are selected in this manner are referred to most
generally as multimethods. (Wikipedia)
So we should be able to do multimethods in Python—or, as it is alternatively called: multiple dispatch.
Multiple dispatch
The multimethods are also called multiple dispatch:
Multiple dispatch or multimethods is the feature of some
object-oriented programming languages in which a function or method
can be dynamically dispatched based on the run time (dynamic) type of
more than one of its arguments. (Wikipedia)
Python does not support this out of the box [1], but, as it happens, there is an excellent Python package called multipledispatch that does exactly that.
Solution
Here is how we might use the multipledispatch [2] package to implement your methods:
>>> from multipledispatch import dispatch
>>> from collections import namedtuple
>>> from types import * # we can test for lambda type, e.g.:
>>> type(lambda a: 1) == LambdaType
True
>>> Sprite = namedtuple('Sprite', ['name'])
>>> Point = namedtuple('Point', ['x', 'y'])
>>> Curve = namedtuple('Curve', ['x', 'y', 'z'])
>>> Vector = namedtuple('Vector', ['x','y','z'])
>>> @dispatch(Sprite, Point, Vector, int)
... def add_bullet(sprite, start, direction, speed):
...     print("Called Version 1")
...
>>> @dispatch(Sprite, Point, Point, int, float)
... def add_bullet(sprite, start, headto, speed, acceleration):
...     print("Called version 2")
...
>>> @dispatch(Sprite, LambdaType)
... def add_bullet(sprite, script):
...     print("Called version 3")
...
>>> @dispatch(Sprite, Curve, int)
... def add_bullet(sprite, curve, speed):
...     print("Called version 4")
...
>>> sprite = Sprite('Turtle')
>>> start = Point(1,2)
>>> direction = Vector(1,1,1)
>>> speed = 100 #km/h
>>> acceleration = 5.0 #m/s**2
>>> script = lambda sprite: sprite.x * 2
>>> curve = Curve(3, 1, 4)
>>> headto = Point(100, 100) # somewhere far away
>>> add_bullet(sprite, start, direction, speed)
Called Version 1
>>> add_bullet(sprite, start, headto, speed, acceleration)
Called version 2
>>> add_bullet(sprite, script)
Called version 3
>>> add_bullet(sprite, curve, speed)
Called version 4
[1] Python 3 currently supports single dispatch.
[2] Take care not to use multipledispatch in a multi-threaded environment or you will get weird behavior.
Python does support "method overloading" as you present it. In fact, what you just describe is trivial to implement in Python, in so many different ways, but I would go with:
class Character(object):
    # your character __init__ and other methods go here

    def add_bullet(self, sprite=default, start=default,
                   direction=default, speed=default, accel=default,
                   curve=default):
        # do stuff with your arguments
In the above code, default is a plausible default value for those arguments, or None. You can then call the method with only the arguments you are interested in, and Python will use the default values.
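For example, a call might then look like this (the character object and argument values here are made up for illustration):

# Only pass the arguments that matter for this kind of bullet;
# the rest keep their default values.
player.add_bullet(sprite=bullet_sprite, start=(0, 0), speed=10)
player.add_bullet(sprite=bullet_sprite, script=my_script)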
You could also do something like this:
class Character(object):
    # your character __init__ and other methods go here

    def add_bullet(self, **kwargs):
        # here you can unpack kwargs as (key, values) and
        # do stuff with them, and use some global dictionary
        # to provide default values and ensure that ``key``
        # is a valid argument...
        # do stuff with your arguments
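A minimal sketch of that idea; the defaults dictionary and the parameter names are invented for illustration:

_BULLET_DEFAULTS = {
    'sprite': None, 'start': (0, 0), 'direction': None,
    'speed': 1, 'accel': 0, 'curve': None, 'script': None,
}

class Character(object):
    def add_bullet(self, **kwargs):
        # Reject anything that is not a known bullet parameter.
        unknown = set(kwargs) - set(_BULLET_DEFAULTS)
        if unknown:
            raise TypeError("unexpected arguments: %s" % ", ".join(sorted(unknown)))
        # Merge the caller's values over the defaults.
        params = dict(_BULLET_DEFAULTS, **kwargs)
        # ... do stuff with params['speed'], params['curve'], etc.
        return params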
Another alternative is to hook the desired function directly onto the class or instance:
def some_implementation(self, arg1, arg2, arg3):
    # implementation
    ...

my_class.add_bullet = some_implementation_of_add_bullet
Yet another way is to use an abstract factory pattern:
class Character(object):
    def __init__(self, bfactory, *args, **kwargs):
        self.bfactory = bfactory
    def add_bullet(self):
        sprite = self.bfactory.sprite()
        speed = self.bfactory.speed()
        # do stuff with your sprite and speed

class pretty_and_fast_factory(object):
    def sprite(self):
        return pretty_sprite
    def speed(self):
        return 10000000000.0

my_character = Character(pretty_and_fast_factory(), a1, a2, kw1=v1, kw2=v2)
my_character.add_bullet()  # uses pretty_and_fast_factory

# now, if you have another factory called "ugly_and_slow_factory"
# you can change it at runtime in python by issuing
my_character.bfactory = ugly_and_slow_factory()

# In the last example you can see abstract factory and "method
# overloading" (as you call it) in action
You can use a "roll-your-own" solution for function overloading. This one is copied from Guido van Rossum's article about multimethods (because there is little difference between multimethods and overloading in Python):
registry = {}

class MultiMethod(object):
    def __init__(self, name):
        self.name = name
        self.typemap = {}
    def __call__(self, *args):
        types = tuple(arg.__class__ for arg in args)  # a generator expression!
        function = self.typemap.get(types)
        if function is None:
            raise TypeError("no match")
        return function(*args)
    def register(self, types, function):
        if types in self.typemap:
            raise TypeError("duplicate registration")
        self.typemap[types] = function

def multimethod(*types):
    def register(function):
        name = function.__name__
        mm = registry.get(name)
        if mm is None:
            mm = registry[name] = MultiMethod(name)
        mm.register(types, function)
        return mm
    return register
The usage would be
from multimethods import multimethod
import unittest

# 'overload' makes more sense in this case
overload = multimethod

class Sprite(object):
    pass

class Point(object):
    pass

class Direction(object):
    pass

class Curve(object):
    pass

@overload(Sprite, Point, Direction, int)
def add_bullet(sprite, start, direction, speed):
    # ...
    pass

@overload(Sprite, Point, Point, int, int)
def add_bullet(sprite, start, headto, speed, acceleration):
    # ...
    pass

@overload(Sprite, str)
def add_bullet(sprite, script):
    # ...
    pass

@overload(Sprite, Curve, int)
def add_bullet(sprite, curve, speed):
    # ...
    pass
Most restrictive limitations at the moment are:
methods are not supported, only functions that are not class members;
inheritance is not handled;
kwargs are not supported;
registering new functions should be done at import time (the registration itself is not thread-safe)
A possible option is to use the multipledispatch module as detailed here:
http://matthewrocklin.com/blog/work/2014/02/25/Multiple-Dispatch
Instead of doing this:
def add(self, other):
    if isinstance(other, Foo):
        ...
    elif isinstance(other, Bar):
        ...
    else:
        raise NotImplementedError()
You can do this:
from multipledispatch import dispatch

@dispatch(int, int)
def add(x, y):
    return x + y

@dispatch(object, object)
def add(x, y):
    return "%s + %s" % (x, y)
With the resulting usage:
>>> add(1, 2)
3
>>> add(1, 'hello')
'1 + hello'
In Python 3.4, PEP 443 (single-dispatch generic functions) was added.
Here is a short API description from the PEP.
To define a generic function, decorate it with the @singledispatch decorator. Note that the dispatch happens on the type of the first argument. Create your function accordingly:
from functools import singledispatch

@singledispatch
def fun(arg, verbose=False):
    if verbose:
        print("Let me just say,", end=" ")
    print(arg)
To add overloaded implementations to the function, use the register() attribute of the generic function. This is a decorator, taking a type parameter and decorating a function implementing the operation for that type:
@fun.register(int)
def _(arg, verbose=False):
    if verbose:
        print("Strength in numbers, eh?", end=" ")
    print(arg)

@fun.register(list)
def _(arg, verbose=False):
    if verbose:
        print("Enumerate this:")
    for i, elem in enumerate(arg):
        print(i, elem)
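Calling the generic function then dispatches on the type of the first argument, for example:

>>> fun("Hello, world.", verbose=True)
Let me just say, Hello, world.
>>> fun(42, verbose=True)
Strength in numbers, eh? 42
>>> fun([1, 2], verbose=True)
Enumerate this:
0 1
1 2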
The @overload decorator was added with type hints (PEP 484).
While this doesn't change the behaviour of Python, it does make it easier to understand what is going on, and for mypy to detect errors.
See: Type hints and PEP 484
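Note that typing.overload only affects static type checkers; at runtime the last, undecorated definition is the one that actually runs. A minimal sketch:

from typing import Union, overload

@overload
def double(x: int) -> int: ...
@overload
def double(x: str) -> str: ...

def double(x: Union[int, str]) -> Union[int, str]:
    # Single runtime implementation; the overloads above are only
    # consulted by static type checkers such as mypy.
    return x * 2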
This type of behaviour is typically solved (in OOP languages) using polymorphism. Each type of bullet would be responsible for knowing how it travels. For instance:
class Bullet(object):
    def __init__(self):
        self.curve = None
        self.speed = None
        self.acceleration = None
        self.sprite_image = None

class RegularBullet(Bullet):
    def __init__(self):
        super(RegularBullet, self).__init__()
        self.speed = 10

class Grenade(Bullet):
    def __init__(self):
        super(Grenade, self).__init__()
        self.speed = 4
        self.curve = 3.5

add_bullet(Grenade())

def add_bullet(bullet):
    c_function(bullet.speed, bullet.curve, bullet.acceleration,
               bullet.sprite, bullet.x, bullet.y)
void c_function(double speed, double curve, double accel, char[] sprite, ...) {
    if (speed != null && ...) regular_bullet(...);
    else if (...) curved_bullet(...);
    // ...etc...
}
Pass as many arguments to the c_function that exist, and then do the job of determining which c function to call based on the values in the initial c function. So, Python should only ever be calling the one c function. That one c function looks at the arguments, and then can delegate to other c functions appropriately.
You're essentially just using each subclass as a different data container, but by defining all the potential arguments on the base class, the subclasses are free to ignore the ones they do nothing with.
When a new type of bullet comes along, you can simply define one more property on the base, change the one python function so that it passes the extra property, and the one c_function that examines the arguments and delegates appropriately. It doesn't sound too bad I guess.
It is impossible by definition to overload a function in Python (read on for details), but you can achieve something similar with a simple decorator:
class overload:
    def __init__(self, f):
        self.cases = {}

    def args(self, *args):
        def store_function(f):
            self.cases[tuple(args)] = f
            return self
        return store_function

    def __call__(self, *args):
        function = self.cases[tuple(type(arg) for arg in args)]
        return function(*args)
You can use it like this
@overload
def f():
    pass

@f.args(int, int)
def f(x, y):
    print('two integers')

@f.args(float)
def f(x):
    print('one float')

f(5.5)
f(1, 2)
Modify it to adapt it to your use case.
A clarification of concepts
Function dispatch: there are multiple functions with the same name. Which one should be called? There are two strategies.
Static/compile-time dispatch (aka "overloading"): decide which function to call based on the compile-time type of the arguments. In all dynamic languages there is no compile-time type, so overloading is impossible by definition.
Dynamic/run-time dispatch: decide which function to call based on the runtime type of the arguments. This is what all OOP languages do: multiple classes have the same methods, and the language decides which one to call based on the type of the self/this argument. However, most languages do it for the this argument only. The above decorator extends the idea to multiple parameters.
To clear up, assume that we define, in a hypothetical static language, the functions
void f(Integer x):
    print('integer called')

void f(Float x):
    print('float called')

void f(Number x):
    print('number called')

Number x = new Integer('5')
f(x)
x = new Number('3.14')
f(x)
With static dispatch (overloading) you will see "number called" twice, because x has been declared as Number, and that's all overloading cares about. With dynamic dispatch you will see "integer called, float called", because those are the actual types of x at the time the function is called.
By passing keyword args.
def add_bullet(**kwargs):
    # check for the arguments listed above and do the proper things
    ...
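A minimal sketch of what that check could look like; the spawn_* helpers are hypothetical stand-ins for whatever actually creates the bullet:

def add_bullet(**kwargs):
    # Pick a code path based on which keyword arguments were supplied.
    if 'script' in kwargs:
        spawn_scripted_bullet(kwargs['sprite'], kwargs['script'])
    elif 'curve' in kwargs:
        spawn_curved_bullet(kwargs['sprite'], kwargs['curve'], kwargs['speed'])
    else:
        spawn_straight_bullet(kwargs['sprite'], kwargs['start'],
                              kwargs['direction'], kwargs['speed'])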
Python 3.8 added functools.singledispatchmethod
Transform a method into a single-dispatch generic function.
To define a generic method, decorate it with the @singledispatchmethod decorator. Note that the dispatch happens on the type of the first non-self or non-cls argument; create your function accordingly:
from functools import singledispatchmethod

class Negator:
    @singledispatchmethod
    def neg(self, arg):
        raise NotImplementedError("Cannot negate a")

    @neg.register
    def _(self, arg: int):
        return -arg

    @neg.register
    def _(self, arg: bool):
        return not arg

negator = Negator()
for v in [42, True, "Overloading"]:
    neg = negator.neg(v)
    print(f"{v=}, {neg=}")
Output
v=42, neg=-42
v=True, neg=False
NotImplementedError: Cannot negate a
@singledispatchmethod supports nesting with other decorators such as
@classmethod. Note that to allow for dispatcher.register,
singledispatchmethod must be the outermost decorator. Here is the
Negator class with the neg methods being class bound:
from functools import singledispatchmethod

class Negator:
    @singledispatchmethod
    @staticmethod
    def neg(arg):
        raise NotImplementedError("Cannot negate a")

    @neg.register
    def _(arg: int) -> int:
        return -arg

    @neg.register
    def _(arg: bool) -> bool:
        return not arg

for v in [42, True, "Overloading"]:
    neg = Negator.neg(v)
    print(f"{v=}, {neg=}")
Output:
v=42, neg=-42
v=True, neg=False
NotImplementedError: Cannot negate a
The same pattern can be used for other similar decorators:
staticmethod, abstractmethod, and others.
I think your basic requirement is to have a C/C++-like syntax in Python with the least headache possible. Although I liked Alexander Poluektov's answer, it doesn't work for classes.
The following should work for classes. It works by distinguishing by the number of non-keyword arguments (but it doesn't support distinguishing by type):
class TestOverloading(object):
    def overloaded_function(self, *args, **kwargs):
        # Call the function that has the same number of non-keyword arguments.
        getattr(self, "_overloaded_function_impl_" + str(len(args)))(*args, **kwargs)

    def _overloaded_function_impl_3(self, sprite, start, direction, **kwargs):
        print("This is overload 3")
        print("Sprite: %s" % str(sprite))
        print("Start: %s" % str(start))
        print("Direction: %s" % str(direction))

    def _overloaded_function_impl_2(self, sprite, script):
        print("This is overload 2")
        print("Sprite: %s" % str(sprite))
        print("Script: ")
        print(script)
And it can be used simply like this:
test = TestOverloading()
test.overloaded_function("I'm a Sprite", 0, "Right")
print()
test.overloaded_function("I'm another Sprite", "while x == True: print 'hi'")
Output:
This is overload 3
Sprite: I'm a Sprite
Start: 0
Direction: Right
This is overload 2
Sprite: I'm another Sprite
Script:
while x == True: print 'hi'
You can achieve this with the following Python code:
@overload
def test(message: str):
    return message

@overload
def test(number: int):
    return number + 1
Either use multiple keyword arguments in the definition, or create a Bullet hierarchy whose instances are passed to the function.
I think a Bullet class hierarchy with the associated polymorphism is the way to go. You can effectively overload the base class constructor by using a metaclass so that calling the base class results in the creation of the appropriate subclass object. Below is some sample code to illustrate the essence of what I mean.
Updated
The code has been modified to run under both Python 2 and 3 to keep it relevant. This was done in a way that avoids the use of Python's explicit metaclass syntax, which varies between the two versions.
To accomplish that objective, a BulletMetaBase instance of the BulletMeta class is created by explicitly calling the metaclass when creating the Bullet baseclass (rather than using the __metaclass__= class attribute or via a metaclass keyword argument depending on the Python version).
class BulletMeta(type):
    def __new__(cls, classname, bases, classdict):
        """ Create Bullet class or a subclass of it. """
        classobj = type.__new__(cls, classname, bases, classdict)
        if classname != 'BulletMetaBase':
            if classname == 'Bullet':  # Base class definition?
                classobj.registry = {}  # Initialize subclass registry.
            else:
                try:
                    alias = classdict['alias']
                except KeyError:
                    raise TypeError("Bullet subclass %s has no 'alias'" %
                                    classname)
                if alias in Bullet.registry:  # unique?
                    raise TypeError("Bullet subclass %s's alias attribute "
                                    "%r already in use" % (classname, alias))
                # Register subclass under the specified alias.
                classobj.registry[alias] = classobj
        return classobj

    def __call__(cls, alias, *args, **kwargs):
        """ Bullet subclasses instance factory.

            Subclasses should only be instantiated by calls to the base
            class with their subclass' alias as the first arg.
        """
        if cls != Bullet:
            raise TypeError("Bullet subclass %r objects should not be "
                            "explicitly constructed." % cls.__name__)
        elif alias not in cls.registry:  # Bullet subclass?
            raise NotImplementedError("Unknown Bullet subclass %r" %
                                      str(alias))
        # Create designated subclass object (call its __init__ method).
        subclass = cls.registry[alias]
        return type.__call__(subclass, *args, **kwargs)


class Bullet(BulletMeta('BulletMetaBase', (object,), {})):
    # Presumably you'd define some abstract methods here
    # that would be supported by all subclasses.
    # These definitions could just raise NotImplementedError() or
    # implement the functionality in some sub-optimal generic way.
    # For example:
    def fire(self, *args, **kwargs):
        raise NotImplementedError(self.__class__.__name__ + ".fire() method")

    # Abstract base class's __init__ should never be called.
    # If subclasses need to call super class's __init__() for some
    # reason then it would need to be implemented.
    def __init__(self, *args, **kwargs):
        raise NotImplementedError("Bullet is an abstract base class")


# Subclass definitions.
class Bullet1(Bullet):
    alias = 'B1'
    def __init__(self, sprite, start, direction, speed):
        print('creating %s object' % self.__class__.__name__)
    def fire(self, trajectory):
        print('Bullet1 object fired with %s trajectory' % trajectory)

class Bullet2(Bullet):
    alias = 'B2'
    def __init__(self, sprite, start, headto, speed, acceleration):
        print('creating %s object' % self.__class__.__name__)

class Bullet3(Bullet):
    alias = 'B3'
    def __init__(self, sprite, script):  # script-controlled bullets
        print('creating %s object' % self.__class__.__name__)

class Bullet4(Bullet):
    alias = 'B4'
    def __init__(self, sprite, curve, speed):  # for bullets with curved paths
        print('creating %s object' % self.__class__.__name__)

class Sprite: pass
class Curve: pass

b1 = Bullet('B1', Sprite(), (10, 20, 30), 90, 600)
b2 = Bullet('B2', Sprite(), (-30, 17, 94), (1, -1, -1), 600, 10)
b3 = Bullet('B3', Sprite(), 'bullet42.script')
b4 = Bullet('B4', Sprite(), Curve(), 720)
b1.fire('uniform gravity')
b2.fire('uniform gravity')
Output:
creating Bullet1 object
creating Bullet2 object
creating Bullet3 object
creating Bullet4 object
Bullet1 object fired with uniform gravity trajectory
Traceback (most recent call last):
File "python-function-overloading.py", line 93, in <module>
b2.fire('uniform gravity') # NotImplementedError: Bullet2.fire() method
File "python-function-overloading.py", line 49, in fire
raise NotImplementedError(self.__class__.__name__ + ".fire() method")
NotImplementedError: Bullet2.fire() method
You can easily implement function overloading in Python. Here is an example using floats and integers:
class OverloadedFunction:
    def __init__(self):
        self.router = {int:   self.f_int,
                       float: self.f_float}

    def __call__(self, x):
        return self.router[type(x)](x)

    def f_int(self, x):
        print('Integer Function')
        return x**2

    def f_float(self, x):
        print('Float Function (Overloaded)')
        return x**3

# f is our overloaded function
f = OverloadedFunction()
print(f(3))
print(f(3.))

# Output:
# Integer Function
# 9
# Float Function (Overloaded)
# 27.0
The main idea behind the code is that a class holds the different (overloaded) functions that you would like to implement, and a dictionary works as a router, directing your code towards the right function depending on the input type.
PS1. In case of custom classes, like Bullet1, you can initialize the internal dictionary following a similar pattern, such as self.D = {Bullet1: self.f_Bullet1, ...}. The rest of the code is the same.
PS2. The time/space complexity of the proposed solution is fairly good as well, with an average cost of O(1) per operation.
Use keyword arguments with defaults. E.g.
def add_bullet(sprite, start=default, direction=default, script=default, speed=default):
In the case of a straight bullet versus a curved bullet, I'd add two functions: add_bullet_straight and add_bullet_curved.
Overloading methods is tricky in Python. However, you can pass a dict, a list, or primitive variables to a single method instead.
I have tried something for my use cases, and this could help people here understand how to overload methods.
Let's take your example:
Define one method whose arguments all default to None:
def add_bullet(sprite=None, start=None, headto=None, speed=None, acceleration=None):
Then pass only the arguments you need from the calling code:
add_bullet(sprite='test', start=True, headto={'lat': 10.6666, 'long': 10.6666}, acceleration=10.6)
Or
add_bullet(sprite='test', start=True, headto={'lat': 10.6666, 'long': 10.6666}, speed=['10', '20', '30'])
This way, lists, dictionaries, and primitive variables can all be handled by the same overloaded method.
Try it out for your code.
Plum supports it in a straightforward pythonic way. Copying an example from the README below.
from plum import dispatch

@dispatch
def f(x: str):
    return "This is a string!"

@dispatch
def f(x: int):
    return "This is an integer!"
>>> f("1")
'This is a string!'
>>> f(1)
'This is an integer!'
You can also try this code. We can pass any number of arguments:
# Finding the average of a given number of arguments
def avg(*args):  # args is the argument name we give
    sum = 0
    for i in args:
        sum += i
    average = sum / len(args)  # len() gives the number of arguments passed
    print("Avg: ", average)

# call the function with different numbers of arguments
avg(1, 2)
avg(5, 6, 4, 7)
avg(11, 23, 54, 111, 76)

Automatically extend class with methods coming from another module

Imagine that I have defined several methods acting on an object, and I have two or more different classes that cannot inherit from the same parent, each having an instance of that object. I want to automatically add all the methods to the two classes, removing the first argument (the object) and replacing it with the instance owned by the class.
Is there a way to do it?
I am sure my question is not clear, so I'll try to give a super-simplified setting. To keep things simple, the object is just a list. I hope that after the example my objective is clear! Thanks in advance for your time.
# I define some methods acting on an object
# (just 2 useless methods acting on a list in this example)
def get_avg(input_list):
    return sum(input_list) / len(input_list)

def multiply_elements(input_list, factor):
    return [i * factor for i in input_list]
Then we have 2 different classes, both of which have an instance of our object (the list):
class A:
    list_of_apples = []

    def get_list_of_apples(self):
        return self.list_of_apples

class B:
    """Totally different class from A(pples), also containing a list"""
    list_of_bears = []

    def get_list_of_bears(self):
        return self.list_of_bears
Now, to call a "list" method on the lists owned by A and B instances, I would need to do the following:
b = B()
get_avg(b.get_list_of_bears())
My goal, instead, is to automatically define some wrappers (like the following ones) which would allow me to call list methods directly from instances of A and B. Here is an example for B:
class B:
    """Totally different class from A(pples), but containing a list"""
    list_of_bears = []

    def get_list_of_bears(self):
        return self.list_of_bears

    def get_avg(self):
        return get_avg(self.list_of_bears)

    def multiply_elements(self, factor):
        return multiply_elements(self.list_of_bears, factor)
With the extended class, I can simply do:
b = B()
b.get_avg()
b.multiply_elements(factor=10)
I would like to automatically extend A and B.
I don't know why your classes cannot inherit from a common ancestor, but one solution I can think of is to create the ancestor dynamically:
def make_ancestor():
    class Temp:
        def get_avg(self):
            input_list = getattr(self, self.list_name)
            return sum(input_list) / len(input_list)

        def multiply_elements(self, factor):
            input_list = getattr(self, self.list_name)
            return [i * factor for i in input_list]
    return Temp

class A(make_ancestor()):
    list_of_apples = []
    list_name = 'list_of_apples'

    def get_list_of_apples(self):
        return self.list_of_apples

class B(make_ancestor()):
    list_of_bears = []
    list_name = 'list_of_bears'

    def get_list_of_bears(self):
        return self.list_of_bears
Now since the parent classes are being generated dynamically your child classes don't inherit from the same parent.
As a test:
print(make_ancestor() == make_ancestor()) # False
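A quick usage check of the dynamically generated ancestors (list values invented for illustration):

a = A()
a.list_of_apples.extend([2, 4, 6])
print(a.get_avg())               # 4.0
print(a.multiply_elements(10))   # [20, 40, 60]

b = B()
b.list_of_bears.extend([1, 2])
print(b.get_avg())               # 1.5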

Iterator for custom class in Python 3

I'm trying to port a custom class from Python 2 to Python 3. I can't find the right syntax to port the iterator for the class. Here is an MCVE of the real class and my attempts to solve this so far:
Working Python 2 code:
class Temp:
    def __init__(self):
        self.d = dict()

    def __iter__(self):
        return self.d.iteritems()

temp = Temp()
for thing in temp:
    print(thing)
In the above code iteritems() breaks in Python 3. According to this highly voted answer, "dict.items now does the thing dict.iteritems did in python 2". So I tried that next:
class Temp:
    def __init__(self):
        self.d = dict()

    def __iter__(self):
        return self.d.items()
The above code yields "TypeError: iter() returned non-iterator of type 'dict_items'"
According to this answer, Python 3 requires iterable objects to provide a next() method in addition to the iter method. Well, a dictionary is also iterable, so in my use case I should be able to just pass dictionary's next and iter methods, right?
class Temp:
    def __init__(self):
        self.d = dict()

    def __iter__(self):
        return self.d.__iter__

    def next(self):
        return self.d.next
This time it's giving me "TypeError: iter() returned non-iterator of type 'method-wrapper'".
What am I missing here?
As the error message suggests, your __iter__ function does not return an iterator, which you can easily fix using the built-in iter function
class Temp:
    def __init__(self):
        self.d = {}

    def __iter__(self):
        return iter(self.d.items())
This will make your class iterable.
Alternatively, you may write a generator yourself, like so:
def __iter__(self):
    for key, item in self.d.items():
        yield key, item
If you want to be able to iterate over keys and items separately, i.e. in the form that the usual python3 dictionary can, you can provide additional functions, for example
class Temp:
    def __init__(self, dic):
        self.d = dic

    def __iter__(self):
        return iter(self.d)

    def keys(self):
        return self.d.keys()

    def items(self):
        return self.d.items()

    def values(self):
        return self.d.values()
I'm guessing from the way you phrased it that you don't actually want the next() method to be implemented if it's not needed. If you did, you would have to turn your whole class into an iterator and keep track of where you currently are in that iteration, because dictionaries themselves are not iterators. See also this answer.
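For completeness, a minimal sketch of that approach (storing an internal items iterator and delegating __next__ to it) could look like this:

class Temp:
    def __init__(self):
        self.d = {}
        self._it = None

    def __iter__(self):
        # Re-create the underlying iterator each time iteration starts.
        self._it = iter(self.d.items())
        return self

    def __next__(self):
        # Delegate to the stored items iterator; raises StopIteration when done.
        return next(self._it)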
I don't know what works in Python 2, but in Python 3 iterators can be most easily created using something called a generator. I am providing the name so that you can research further.
class Temp:
    def __init__(self):
        self.d = {}

    def __iter__(self):
        for thing in self.d.items():
            yield thing

Storing a reference to a reference in Python?

Using Python, is there any way to store a reference to a reference, so that I can change what that reference refers to in another context? For example, suppose I have the following class:
class Foo:
    def __init__(self):
        self.standalone = 3
        self.lst = [4, 5, 6]
I would like to create something analogous to the following:
class Reassigner:
    def __init__(self, target):
        self.target = target

    def reassign(self, value):
        # not sure what to do here, but reassigns the reference given by target to value
        pass
Such that the following code
f = Foo()
rStandalone = Reassigner(f.standalone) # presumably this syntax might change
rIndex = Reassigner(f.lst[1])
rStandalone.reassign(7)
rIndex.reassign(9)
Would result in f.standalone equal to 7 and f.lst equal to [4, 9, 6].
Essentially, this would be an analogue to a pointer-to-pointer.
In short, it's not possible. At all. The closest equivalent is storing a reference to the object whose member/item you want to reassign, plus the attribute name/index/key, and then use setattr/setitem. However, this yields quite different syntax, and you have to differentiate between the two:
class AttributeReassigner:
    def __init__(self, obj, attr):
        # use your imagination
        self.obj, self.attr = obj, attr

    def reassign(self, val):
        setattr(self.obj, self.attr, val)

class ItemReassigner:
    def __init__(self, obj, key):
        # use your imagination
        self.obj, self.key = obj, key

    def reassign(self, val):
        self.obj[self.key] = val

f = Foo()
rStandalone = AttributeReassigner(f, 'standalone')
rIndex = ItemReassigner(f.lst, 1)
rStandalone.reassign(7)
rIndex.reassign(9)
I've actually used something very similar, but the valid use cases are few and far between.
For globals/module members, you can use either the module object or globals(), depending on whether you're inside the module or outside of it. There is no equivalent for local variables at all -- the result of locals() cannot be used to change locals reliably, it's only useful for inspecting.
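For illustration, a small sketch of both the globals() case and the module-object case (CONFIG_PATH is an invented name, and a throwaway module stands in for a real import):

import types

# From inside a module, globals() rebinds module-level names reliably.
CONFIG_PATH = '/etc/app.ini'
globals()['CONFIG_PATH'] = '/tmp/config.ini'
print(CONFIG_PATH)        # /tmp/config.ini

# From outside, use the module object itself.
mod = types.ModuleType('mymodule')
mod.CONFIG_PATH = '/etc/app.ini'
setattr(mod, 'CONFIG_PATH', '/tmp/config.ini')
print(mod.CONFIG_PATH)    # /tmp/config.ini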
Simple answer: You can't.
Complicated answer: You can use lambdas. Sort of.
class Reassigner:
    def __init__(self, target):
        self.reassign = target

f = Foo()
rIndex = Reassigner(lambda value: f.lst.__setitem__(1, value))
rStandalone = Reassigner(lambda value: setattr(f, 'standalone', value))
rF = Reassigner(lambda value: locals().__setitem__('f', value))
If you need to defer assignments, you could use functools.partial (or just lambda):
from functools import partial
set_standalone = partial(setattr, f, "standalone")
set_item = partial(f.lst.__setitem__, 1)
set_standalone(7)
set_item(9)
If reassign is the only operation, you don't need a new class.
Functions are first-class citizens in Python: you can assign them to a variable, store in a list, pass as arguments, etc.
This would work for the contents of container objects. If you don't mind adding one level of indirection to your variables (which you'd need in the C pointer-to-pointer case anyway), you could:
class Container(object):
    def __init__(self, val):
        self.val = val

class Foo(object):
    def __init__(self):
        self.standalone = Container(3)
        self.lst = [Container(4), Container(5), Container(6)]
And you wouldn't really need the reassigner object at all.
class Reassigner(object):
    def __init__(self, target):
        self.target = target

    def reassign(self, value):
        self.target.val = value
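A short usage sketch of the Container approach (values invented for illustration):

f = Foo()

# Direct access through the extra level of indirection...
f.standalone.val = 7
f.lst[1].val = 9

# ...or through the Reassigner, if you still want that interface.
r = Reassigner(f.lst[2])
r.reassign(42)

print(f.standalone.val, [c.val for c in f.lst])  # 7 [4, 9, 42]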

Should a class constructor return a subclass?

This is mostly a question about OOP style and Python style. I have a problem where I need to implement a general-case solution and, for performance reasons, I need to implement an optimized solution for a specific input type. The input type depends on the user. Currently I've implemented this by sub-classing the general-case solution to make the optimized solution. I've come up with the following example to help describe what I mean.
from collections import Counter

class MyCounter(object):
    """General Case Counter"""
    def __init__(self, seq):
        self.seq = seq

    def count(self, key):
        return sum(key == item for item in self.seq)

class OptimizedCounter(MyCounter):
    """Counter optimized for hashable types"""
    def __init__(self, seq):
        self.counter = Counter(seq)

    def count(self, key):
        return self.counter.get(key, 0)

counter = MyCounter(['a', 'a', 'b', [], [0, 1]])
counter.count([0, 1])
# 1

counter = OptimizedCounter(['a', 'a', 'b'])
counter.count('a')
# 2
My question is how do I design a smooth interface so that the user gets an appropriate instance without having to worry about how it's implemented. I've considered doing something like the following, but that feels ugly to me. Is there a more canonical or OOP way to do something like this?
class MyCounter(object):
    """General Case Counter"""
    def __new__(cls, seq):
        if hasOnlyHashables(seq):
            return object.__new__(OptimizedCounter)
        else:
            return object.__new__(MyCounter)
Use a factory function that returns an instance of the appropriate class.
def makeCounter(seq):
    if hasOnlyHashables(seq):
        return OptimizedCounter(seq)
    else:
        return MyCounter(seq)
Your allocator implementation is a little off. If you need to create an instance of a child (or different) type then you do so by calling its constructor; only if you want to create an instance of the current class should you call the parent's (object in this case) allocator.
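For concreteness, here is a sketch of what that advice could look like in code; hasOnlyHashables is assumed from the question:

class MyCounter(object):
    """General Case Counter"""
    def __new__(cls, seq):
        # Creating a *different* class: call that class's constructor.
        # Only intercept direct MyCounter(...) calls so that
        # OptimizedCounter(seq) can still be instantiated on its own.
        if cls is MyCounter and hasOnlyHashables(seq):
            return OptimizedCounter(seq)
        # Creating the current class: fall back to the parent allocator.
        return object.__new__(cls)

    def __init__(self, seq):
        self.seq = seq

    def count(self, key):
        return sum(key == item for item in self.seq)

One caveat with this sketch: when the OptimizedCounter branch is taken, the returned object is still an instance of MyCounter, so Python will run OptimizedCounter.__init__ a second time with the same arguments after __new__ returns; that is harmless here but worth keeping in mind.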
No, the class constructor doesn't return anything.
You need to create a factory as suggested by BrenBarn, but I would put that factory as a static method in the most generic class.
Something like this:
class MyCounter(object):
    @staticmethod
    def from_seq(seq):
        if hasOnlyHashables(seq):
            return OptimizedCounter(seq)
        else:
            return MyCounter(seq)
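Usage would then look something like this (assuming hasOnlyHashables behaves as its name suggests):

counter = MyCounter.from_seq(['a', 'a', 'b'])      # all hashable -> OptimizedCounter
fallback = MyCounter.from_seq(['a', [], [0, 1]])   # contains lists -> plain MyCounter
print(counter.count('a'))   # 2
print(fallback.count([]))   # 1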
