I have a class which I want to have all the functionality of frozenset, but I don't want it to be constructible the way frozenset is (whose constructor takes an iterable).
Additionally, I want it to have a reload method: I am loading a static list from a server, so the user should not be able to change it (and I don't want the user to think they can change it).
The list on the server can be changed by an admin, so I need the reload option.
This is what I hoped for:
class A(frozenset):
    def __init__(self, list_id):
        super().__init__()
        self.list_id = list_id
        self.reload()

    def reload(self):
        # load stuff by self.list_id...
        pass
But I didn't find a way to 'add' new items to the class (I tried to re-init it).
Maybe I am using the wrong tool, so if you have another way to do this, that's fine too (I need the option to compare the difference between two different objects):
a = A(1)
b = A(2)
len(a)
iter(a)
a.difference(b)
Maybe overloading add and update of set would work, but I don't want to do that (it looks bad in code, because there are more update-like functions to cover).
You cannot update the frozenset contents, no; it remains immutable even when subclassed.
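To illustrate why (a minimal sketch, not from the original answer): a frozenset's contents are fixed in __new__, before __init__ ever runs, and there is no public API to replace them afterwards.

# Sketch: frozenset contents must be supplied to __new__, not __init__.
class B(frozenset):
    def __new__(cls, values):
        return super().__new__(cls, values)

b = B(['x', 'y'])
# b is now {'x', 'y'} for its whole lifetime; a reload() method would
# have nothing it could legally mutate.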
You can subclass the collections.abc.Set abstract base class instead; it models an immutable set too. All you really need to do is implement the methods listed in the Abstract Methods column of the collections.abc table, and the rest is taken care of for you:
from collections.abc import Set

class A(Set):
    def __init__(self, list_id):
        self.list_id = list_id
        self.reload()

    def reload(self):
        values = get_values(self.list_id)
        self._values = frozenset(values)

    def __contains__(self, item):
        return item in self._values

    def __iter__(self):
        return iter(self._values)

    def __len__(self):
        return len(self._values)
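One caveat worth adding: the Set mixin methods build new sets via the internal classmethod _from_iterable(), which defaults to calling cls(iterable). Since A's constructor takes a list_id instead of an iterable, the inherited operators (|, &, -, ^) would fail unless you override it:

@classmethod
def _from_iterable(cls, it):
    # Results of set operations cannot be meaningful A instances
    # (they have no list_id), so return a plain frozenset instead.
    return frozenset(it)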
Not all methods of the built-in frozenset type are implemented; you can easily supply the missing ones as these are aliases of the operator methods:
def issubset(self, other):
    return self <= frozenset(other)

def issuperset(self, other):
    return self >= frozenset(other)

def union(self, *others):
    res = self
    for o in others:
        res |= frozenset(o)
    return res

def intersection(self, *others):
    res = self
    for o in others:
        res &= frozenset(o)
    return res

def difference(self, *others):
    res = self
    for o in others:
        res -= frozenset(o)
    return res

def symmetric_difference(self, other):
    return self ^ frozenset(other)
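A quick demo, stubbing out get_values (the loader this answer assumes) with a hard-coded mapping, and relying on the _from_iterable override above:

def get_values(list_id):
    # stand-in for the real server call
    return {1: {'a', 'b', 'c'}, 2: {'b', 'c', 'd'}}[list_id]

a = A(1)
b = A(2)
print(len(a))             # 3
print(sorted(a))          # ['a', 'b', 'c']
print(a.difference(b))    # frozenset({'a'})
print(a.issubset({'a', 'b', 'c', 'd'}))  # True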
Related
I have some classes FooA and FooB which are basically collections of "static" methods. They operate on data - let's say it is a DataItem object:
# Base class with common behavior
class FooBase:
    @classmethod
    def method1(cls, arg, data: DataItem):
        # res = ...
        return res

    @classmethod
    def method2(cls, arg1, arg2, data: DataItem):
        # res = ...  # computed using method1
        return res

# specialized classes
class FooA(FooBase):
    # define extra methods
    pass

class FooB(FooBase):
    # define extra methods
    pass

# usage 1: as "static methods"
res = FooA.method1(arg, data)
res2 = FooB.method2(arg1, arg2, data)
Now, I'd like to use these classes as attributes of a "managing" class (MyApp) which also has access to a datasource and should implicitly supply DataItems to the static methods of FooA and FooB. Moreover, the datasource supplies a list of DataItem objects.
# usage 2: as part of an "App" class
# here, the "data" argument should be supplied implicitly by MyApp
# also: MyApp contains a list of "data" objects
class MyApp:
    def __init__(self, datasrc):
        self.datasrc = datasrc

    # this could be a generator
    def get_data(self, key) -> List[DataItem]:
        return self.datasrc.get_data(key)

    # FooA, FooB as class / instance level attributes, descriptors, ???

# usage
my_app = MyApp("datasrc")
res_list = my_app.foo_a.method1(arg)  # foo_a is a FooA obj, "data" arg is supplied automatically
# optionally, but not necessarily, call it as a static attribute:
res = MyApp.foo_a.method1(arg, data)  # same as FooA.method1(arg, data)
I have tried different things but found no satisfactory solution.
So... I am not sure it can be done in a nice way; I have thought about it, and all approaches have serious drawbacks. One of the problems is that we actually want a method that returns either a list or a single item, depending on its input parameters, which is bad.
One way could be to store datasrc in FooBase, but that violates the single-responsibility principle (SRP):
class FooBase:
    def __init__(self, datasrc):
        FooBase.datasrc = datasrc

    @classmethod
    def method1(cls, arg, data=None):
        if data is None:
            # no data given: apply to every item in the datasource
            return [cls.method1(arg, d) for d in cls.datasrc]
        return data  # placeholder for the real computation
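A usage sketch for this variant, with plain strings standing in for DataItem objects:

items = ["item1", "item2"]     # stand-ins for DataItem objects
FooBase(items)                 # stores the datasource on the class itself
FooA.method1("arg")            # no data -> ["item1", "item2"], one result per item
FooA.method1("arg", items[0])  # explicit data -> "item1", a single result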
Or use isinstance:
@classmethod
def method1(cls, arg, data):
    if isinstance(data, list):
        return [cls.method1(arg, d) for d in data]
    return data  # placeholder for the real computation
But it forces us to adjust every method (although that could be automated with a decorator or a metaclass; see the sketch below).
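For example, a decorator along these lines (a sketch, not from the original answer) could apply the isinstance dispatch to every classmethod without touching the method bodies:

from functools import wraps

def dispatch_one_or_many(func):
    # Wraps a classmethod-style function so that a list of data
    # items is mapped over automatically.
    @wraps(func)
    def wrapper(cls, arg, data):
        if isinstance(data, list):
            return [func(cls, arg, d) for d in data]
        return func(cls, arg, data)
    return classmethod(wrapper)

class FooBase:
    @dispatch_one_or_many
    def method1(cls, arg, data):
        return data  # placeholder for the real computation

# FooBase.method1("arg", ["a", "b"]) -> ["a", "b"]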
Another way could be to use an intermediate layer:
def decorator(datasrc):
    def wrapper(foo):
        def f(*args, **kwargs):
            # We could catch TypeError here to serve the case when data is passed explicitly
            return [foo(*args, **kwargs, data=data) for data in datasrc]
        return f
    return wrapper

class FooAdapter:
    def __init__(self, datasrc, foo_cls):
        self.datasrc = datasrc
        methods = [
            getattr(foo_cls, m)
            for m in dir(foo_cls)
            if callable(getattr(foo_cls, m)) and not m.startswith("__")
        ]  # all public methods of our Foo class
        for method in methods:
            setattr(self, method.__name__, decorator(datasrc)(method))
class MyApp:
    def __init__(self, datasrc):
        self.datasrc = datasrc
        self.foo_a = FooAdapter(datasrc, FooA)
        self.foo_b = FooAdapter(datasrc, FooB)
But a solution with dynamically added functions breaks IDE support (autocompletion, static checking).
The cleanest solution, IMO, would be to have an Enum for the Foo methods and an Enum for the Foo classes; then, inside MyApp, you could write:

def get_bulk(self, m: MethodEnum, f: FooEnum, *args):
    return [getattr(enum_to_cls_mapping[f], m.value)(*args, data=d) for d in self.datasrc]
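The enums and mapping that snippet assumes could be defined like this (names are illustrative):

from enum import Enum

class FooEnum(Enum):
    A = "foo_a"
    B = "foo_b"

class MethodEnum(Enum):
    method1 = "method1"
    method2 = "method2"

enum_to_cls_mapping = {FooEnum.A: FooA, FooEnum.B: FooB}

# usage: my_app.get_bulk(MethodEnum.method1, FooEnum.A, arg)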
I'm trying to create a "function with inheritance" so that I don't have to have a bunch of similar functions with copy-pasted code. This is effectively what I ended up with:
from abc import ABCMeta, abstractmethod

class TypeConverter:
    __metaclass__ = ABCMeta  # Python 2 style metaclass declaration

    def __convert__(self, thing):
        if type(thing) == list:
            return [self.convert_one(self, t) for t in thing]
        else:
            return self.convert_one(self, thing)

    def __new__(self, thing):
        return self.__convert__(self, thing)

    @abstractmethod
    def convert_one(self, thing):
        pass

class HexToInt(TypeConverter):
    def convert_one(self, _hex):
        return int(_hex, 16)

class IntToHex(TypeConverter):
    def convert_one(self, _int):
        return hex(_int)
In py3 it runs fine, and the objects work like functions, which is the intent:
>>> HexToInt(['ff' , 'fe'])
[255, 254]
>>> IntToHex(255)
'0xff'
In py2 it does not work (but that's where I need it to work):
unbound method __convert__() must be called with HexToInt instance as first argument (got ABCMeta instance instead)
Ideally, TypeConverter could hold all the common logic about whether or not to return a sequence, what type of sequence it should be, etc., and the child classes would implement only the specific conversion logic. I don't want to have regular function objects, since I'd have to instantiate and then call the object; I want it to work exactly like a function, i.e. it does the thing when the parens close, like in the example above.
How bad of an idea is this? What are the problems here? Can I make it work even if I shouldn't?
Any and all thoughts highly appreciated.
Update, it works:
from abc import abstractmethod

class TypeConverter(object):
    def __convert(this, thing):
        if type(thing) == list:
            return [this.convert_one(t) for t in thing]
        else:
            return this.convert_one(thing)

    def __new__(cls, thing):
        c = super(TypeConverter, cls).__new__(cls)
        return cls.__convert(c, thing)

    @abstractmethod
    def convert_one(self, thing):
        pass

class HexToInt(TypeConverter):
    def convert_one(self, _hex):
        return int(_hex, 16)

class IntToHex(TypeConverter):
    def convert_one(self, _int):
        return hex(_int)
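A quick check that the updated version still behaves like the original examples:

>>> HexToInt(['ff', 'fe'])
[255, 254]
>>> IntToHex(255)
'0xff'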
This would be a lot simpler as a simple decorator:
def one_or_many(f):
    def wrapper(arg):
        if isinstance(arg, list):
            return list(map(f, arg))
        else:
            return f(arg)
    return wrapper

@one_or_many
def hex_to_int(_hex):
    return int(_hex, 16)

int_to_hex = one_or_many(hex)
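The decorated functions then handle both shapes of input:

>>> hex_to_int('ff')
255
>>> hex_to_int(['ff', 'fe'])
[255, 254]
>>> int_to_hex([255, 254])
['0xff', '0xfe']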
No need for classes if you don't actually need classes.
I understand what I am asking here is probably not the best code design, but the reason for me asking is strictly academic. I am trying to understand how to make this concept work.
Typically, I will return self from a method so that the following method calls can be chained. My understanding is that by returning self, I am simply returning the instance of the class for the following methods to work on.
But in this case, I am trying to figure out how to return both self and another value from the method. The idea is that if I do not want to chain, or I do not access any class attributes, I want to retrieve the data from the method being called.
Consider this example:
class Test(object):
    def __init__(self):
        self.hold = None

    def methoda(self):
        self.hold = 'lol'
        return self, 'lol'

    def newmethod(self):
        self.hold = self.hold * 2
        return self, 2
t = Test()
t.methoda().newmethod()
print(t.hold)
In this case, I will get AttributeError: 'tuple' object has no attribute 'newmethod', which is to be expected, because methoda returns a tuple, and tuples do not have any method or attribute called newmethod.
My question is not about unpacking multiple return values, but about how I can continue to chain methods when the preceding methods return multiple values. I also understand that I could control the method's return with an argument to it, but that is not what I am trying to do.
As mentioned previously, I do realize this is probably a bad question, and I am happy to delete the post if the question doesn't make sense.
Following the suggestion by @JohnColeman, you can return a special tuple with attribute lookup delegated to your object when the attribute is not a normal tuple attribute. That way it acts like a normal tuple except when you are chaining methods.
You can implement this as follows:
class ChainResult(tuple):
    def __new__(cls, *args):
        return super(ChainResult, cls).__new__(cls, args)

    def __getattribute__(self, name):
        try:
            return getattr(super(), name)
        except AttributeError:
            return getattr(super().__getitem__(0), name)

class Test(object):
    def __init__(self):
        self.hold = None

    def methoda(self):
        self.hold = 'lol'
        return ChainResult(self, 'lol')

    def newmethod(self):
        self.hold = self.hold * 2
        return ChainResult(self, 2)
Testing:
>>> t = Test()
>>> t.methoda().newmethod()
>>> print(t.hold)
lollol
The returned result does indeed act as a tuple:
>>> t, res = t.methoda().newmethod()
>>> print(res)
2
>>> print(isinstance(t.methoda().newmethod(), tuple))
True
You could imagine all sorts of semantics with this, such as forwarding the returned values to the next method in the chain using a closure:
class ChainResult(tuple):
    def __new__(cls, *args):
        return super(ChainResult, cls).__new__(cls, args)

    def __getattribute__(self, name):
        try:
            return getattr(super(), name)
        except AttributeError:
            attr = getattr(super().__getitem__(0), name)
            if callable(attr):
                chain_results = super().__getitem__(slice(1, None))
                return lambda *args, **kw: attr(*(chain_results + args), **kw)
            else:
                return attr
For example,
class Test:
    ...
    def methodb(self, *args):
        print(*args)
would produce
>>> t = Test()
>>> t.methoda().methodb('catz')
lol catz
It would be nice if you could make ChainResult invisible. You can almost do it: initialize the tuple base class with the normal results and save your object in a separate attribute used only for chaining, then use a class decorator that wraps every method with ChainResult(self, self.method(*args, **kw)). It works okay for methods that return a tuple, but a single-value return will act like a length-1 tuple, so you will need something like obj.method()[0] or result, = obj.method() to work with it. I played a bit with delegating to the tuple for a multiple return, or to the value itself for a single return; maybe it could be made to work, but it introduces so many ambiguities that I doubt it could work well.
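The class decorator mentioned above could be sketched like this (illustrative only, using the first ChainResult definition, and with the single-value caveat just described):

def chainable(cls):
    # Wrap every public method so it returns ChainResult(self, result).
    def wrap(method):
        def wrapper(self, *args, **kw):
            return ChainResult(self, method(self, *args, **kw))
        return wrapper
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith('__'):
            setattr(cls, name, wrap(attr))
    return cls

@chainable
class Test:
    def methoda(self):
        self.hold = 'lol'
        return 'lol'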
I want to do something decidedly unpythonic. I want to create a class that allows for forward declarations of its class attributes. (If you must know, I am trying to make some sweet syntax for parser combinators.)
This is the kind of thing I am trying to make:
a = 1

class MyClass(MyBaseClass):
    b = a      # Refers to something outside the class
    c = d + b  # Here's a forward declaration to 'd'
    d = 1      # Declaration resolved
My current direction is to make a metaclass so that when d is not found I catch the NameError exception and return an instance of some dummy class I'll call ForwardDeclaration. I take some inspiration from AutoEnum, which uses metaclass magic to declare enum values with bare identifiers and no assignment.
Below is what I have so far. The missing piece is: how do I continue normal name resolution and catch the NameErrors:
class MetaDict(dict):
    def __init__(self):
        self._forward_declarations = dict()

    def __getitem__(self, key):
        try:
            ### WHAT DO I PUT HERE ??? ###
            # How do I continue name resolution to see if the
            # name already exists in the scope of the class?
        except NameError:
            if key in self._forward_declarations:
                return self._forward_declarations[key]
            else:
                new_forward_declaration = ForwardDeclaration()
                self._forward_declarations[key] = new_forward_declaration
                return new_forward_declaration
class MyMeta(type):
    @classmethod
    def __prepare__(mcs, name, bases):
        return MetaDict()

class MyBaseClass(metaclass=MyMeta):
    pass

class ForwardDeclaration:
    # Minimal behavior
    def __init__(self, value=0):
        self.value = value

    def __add__(self, other):
        return ForwardDeclaration(self.value + other)
To start with:
def __getitem__(self, key):
    try:
        return super().__getitem__(key)
    except KeyError:
        ...
But that won't allow you to retrieve the global variables outside the class body.
You can also use the __missing__ method, which is reserved exactly for subclasses of dict:
class MetaDict(dict):
    def __init__(self):
        self._forward_declarations = dict()

    # Just leave __getitem__ as it is on "dict"

    def __missing__(self, key):
        if key in self._forward_declarations:
            return self._forward_declarations[key]
        else:
            new_forward_declaration = ForwardDeclaration()
            self._forward_declarations[key] = new_forward_declaration
            return new_forward_declaration
As you can see, that is not that "unpythonic" - advanced Python projects such as SymPy and SQLAlchemy have to resort to this kind of behavior to do their nice magic - just be sure to get it very well documented and tested.
Now, to allow for global (module) variables, you have to go a little out of your way - and possibly use something that may not be available in all Python implementations - that is: introspecting the frame where the class body is being executed to get its globals:
import sys

...

class MetaDict(dict):
    def __init__(self):
        self._forward_declarations = dict()

    # Just leave __getitem__ as it is on "dict"

    def __missing__(self, key):
        class_body_globals = sys._getframe().f_back.f_globals
        if key in class_body_globals:
            return class_body_globals[key]
        if key in self._forward_declarations:
            return self._forward_declarations[key]
        else:
            new_forward_declaration = ForwardDeclaration()
            self._forward_declarations[key] = new_forward_declaration
            return new_forward_declaration
Now that you are here - your special dictionaries are good enough to avoid NameErrors, but your ForwardDeclaration objects are far from smart enough - when running:
a = 1

class MyClass(MyBaseClass):
    b = a      # Refers to something outside the class
    c = d + b  # Here's a forward declaration to 'd'
    d = 1
What happens is that c becomes a ForwardDeclaration object, but summed with the instant value of d, which is zero. On the next line, d is simply overwritten with the value 1 and is no longer a lazy object. So you might just as well have declared c = 0 + b.
To overcome this, ForwardDeclaration has to be a class designed in a smart way, so that its values are always lazily evaluated and it behaves as in the "reactive programming" approach: i.e. updates to a value cascade into all other values that depend on it. I think giving you a full implementation of a working "reactive" ForwardDeclaration class falls outside the scope of this question - I have some toy code to do that on GitHub at https://github.com/jsbueno/python-react , though.
Even with a proper "reactive" ForwardDeclaration class, you have to fix your dictionary again so that the d = 1 assignment works:
class MetaDict(dict):
    def __init__(self):
        self._forward_declarations = dict()

    def __setitem__(self, key, value):
        if key in self._forward_declarations:
            self._forward_declarations[key] = value
            # Trigger your reactive update here if your approach is not
            # automatic
            return None
        return super().__setitem__(key, value)

    def __missing__(self, key):
        # as above
        ...
And finally, there is a way to avoid having to implement a fully reactive-aware class - you can resolve all pending forward dependencies in the __new__ method of the metaclass (so that your ForwardDeclaration objects are manually "frozen" at class creation time, and there are no further worries).
Something along the lines of:
from functools import reduce

sentinel = object()

class ForwardDeclaration:
    # Minimal behavior
    def __init__(self, value=sentinel, dependencies=None):
        self.dependencies = dependencies or []
        self.value = value

    def __add__(self, other):
        if isinstance(other, ForwardDeclaration):
            return ForwardDeclaration(dependencies=self.dependencies + [self])
        return ForwardDeclaration(self.value + other)

class MyMeta(type):
    def __new__(metacls, name, bases, attrs):
        for key, value in list(attrs.items()):
            if not isinstance(value, ForwardDeclaration):
                continue
            if any(v.value is sentinel for v in value.dependencies):
                continue
            attrs[key] = reduce(lambda a, b: a + b.value, value.dependencies, 0)
        return super().__new__(metacls, name, bases, attrs)

    @classmethod
    def __prepare__(mcs, name, bases):
        return MetaDict()
And, depending on your class hierarchy and what exactly you are doing, remember to also update one class's _forward_declarations dict with the _forward_declarations created on its ancestors.
AND if you need any operator other than +, as you will have noted, you will have to keep information about the operator itself - at this point, you might as well just use sympy.
Is there a dunder for this? Perhaps something along the lines of: (updated)
class Tree:
    def __init__(self, item_or_tree):
        self._setto(item_or_tree)

    def __assign__(self, val):
        self._setto(val)

    def __setitem__(self, which, to_what):
        ## I would like this to call __assign__ on the Tree object at _tree[which]
        self._tree[which] = to_what

    def __getitem__(self, which):
        return self._tree[which]

    def __len__(self):
        return len(self._tree)

    def __eq__(self, other):
        if isinstance(other, Tree):
            if other._is_tree:
                return (self._item == other._item) and (self._tree == other._tree)
            else:
                return self._item == other._item
        else:
            return self._item == other

    def _setto(self, item_or_tree):
        if isinstance(item_or_tree, Tree):
            self._set_from_Tree(item_or_tree)
        elif isinstance(item_or_tree, dict):
            self._set_from_dict(item_or_tree)
        else:
            self._set_from_other(item_or_tree)

    def _set_from_Tree(self, other_Tree):
        self._tree = other_Tree._tree
        self._item = other_Tree._item
        self._is_tree = other_Tree._is_tree

    def _set_from_dict(self, the_dict):
        self._is_tree = True
        self._item = None
        self._tree = {}
        for key, val in the_dict.items():
            self._tree[key] = Tree(val)

    def _set_from_other(self, other):
        self._is_tree = False
        self._tree = None
        self._item = other
class TreeModel(Tree, QAbstractItemModel):
    ...
    ## a whole bunch of required overrides
    ## etc
    ...
What I'm trying to do is implement a generalized tree structure that acts as intuitively (to me) as possible and also seamlessly integrates with PyQt5's Model-View-Delegate architecture.
I want to be able to set the incoming item_or_tree to either the item or tree. So I'm looking to overload the function that's called when the = operator is used on the item.
PyQt has this item based architecture in which a QAbstractItemModel is overridden. This is (I guess) supposed to return / accept QModelIndex objects. These are trees of tables (2D arrays).
So I'm creating a single tree structure that can contain itself, deal with the 2 opposing indexing paradigms, and plays nice with Python and everything else.
It is not possible to override the implementation of x = y. See Facts and Myths about Python Names and Values for details of what assignment means.
You can override x.a = y with __setattr__; it is (roughly) x.__setattr__('a', y).
You can override x[k] = y with __setitem__; it is (roughly) x.__setitem__(k, y).
But you can't override x = y.
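So for the Tree class, the closest you can get is to route updates through item assignment and have __setitem__ update the existing node in place - a sketch, building on the question's code:

class Tree:
    ...
    def __setitem__(self, which, to_what):
        # assumes self._tree is a dict, as in _set_from_dict
        node = self._tree.get(which)
        if isinstance(node, Tree):
            node._setto(to_what)  # rebind the existing node's contents in place
        else:
            self._tree[which] = Tree(to_what)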