Can class instances be accessed via an index in Python?

Consider for example that we have a class 'Agent' as below:
class Agent:
    def __init__(self, number):
        self.position = []
        self.number = number
        for i in range(number):
            self.position.append([0, 0])
I can make an instance of the class by:
agent = Agent(10)
and then access the i'th agent's position by:
agent.position[i]
However, this does not seem particularly elegant and feels a bit counter-intuitive to me. Instead, I want to index the class instance itself. For example:
pos_i = agent[i].position
which should return the same answer as the one-line code above. Is there a way to accomplish this?

If you want to do that, you just need a class-level container holding all instances.
Since, given your example, your positions are created in an arbitrary order, I'd suggest using a dictionary.
You can fill this class-level "position" dictionary and then implement the __getitem__ method to retrieve elements from it:
class Agent:
    position = {}

    def __new__(cls, pos):
        if pos in cls.position:
            return cls.position[pos]
        instance = super().__new__(cls)
        cls.position[pos] = instance
        return instance

    def __getitem__(self, item):
        return self.position[item]
This, however, will only allow you to retrieve an instance by its position from another instance - i.e.:
agent_5 = Agent(5)
agent_10 = agent_5[10]
would work, but not:
agent_10 = Agent[10]
If you want that, you have to use a custom metaclass, and put the __getitem__ method there:
class MAgent(type):
    def __getitem__(cls, item):
        return cls.position[item]


class Agent(metaclass=MAgent):
    position = {}

    def __new__(cls, pos):
        if pos in cls.position:
            return cls.position[pos]
        instance = super().__new__(cls)
        cls.position[pos] = instance
        return instance
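With the metaclass in place, indexing the class itself should work as intended - a quick sketch of the expected usage:
>>> agent_10 = Agent(10)
>>> Agent[10] is agent_10
True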

If you want to overload the indexing operator, just override the __getitem__ method in the class.
class Agent:
    def __getitem__(self, key):
        return self.position[key]
>>> agent = Agent(10)
>>> agent[3]
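Putting that together with the original Agent class from the question, a minimal runnable sketch (one reading of what the question asks for, with __getitem__ simply forwarding to the position list) might look like this:
class Agent:
    def __init__(self, number):
        self.number = number
        # one [x, y] position per agent
        self.position = [[0, 0] for _ in range(number)]

    def __getitem__(self, index):
        # agent[i] returns the i'th agent's position
        return self.position[index]


agent = Agent(10)
print(agent[3])                       # [0, 0]
print(agent[3] is agent.position[3])  # True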

Related

How to have the same updated value of a Parent class be passed down to an inner class?

I need to access the value of an attribute defined on the parent class from inside an inner class; here's the code:
class main(object):
    def __init__(self):
        self.session_id = None
        self.devices = self.Devices(self.session_id)

    class Devices(object):
        def __init__(self, session_id):
            self.session_id = session_id
And here's how I would like to use it:
>>> m = main()
>>> m.session_id = 1
>>> m.session_id
1
>>> m.devices.session_id
>>>
My expectation is that m.devices.session_id will always have exactly the same value as m.session_id. I understand that at the point where I instantiate the inner class, the session_id value is passed down as None because that's how it was initialized, but I'm not sure how I can keep both values the same without doing something very ugly like:
m.devices.session_id = m.session_id
outside the class code.
How can I accomplish that inside the class itself ?
The other answer works, but I think this is a better design: lose the nested class, and add a getter on the device object that looks up a back-reference:
class Main(object):
    def __init__(self):
        self.session_id = None
        self.devices = Devices(main_obj=self)


class Devices(object):
    def __init__(self, main_obj):
        self.main_obj = main_obj
        ...

    @property
    def session_id(self):
        return self.main_obj.session_id
The difference here is that you're not storing the same data twice, so they cannot get out of sync - there is only one "source of truth" for the session_id (on the Main object).
In the other answer, the data is actually stored in two different namespaces and will get out of sync as easily as m.devices.session_id = 123.
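A quick usage sketch, assuming the Main and Devices classes above:
>>> m = Main()
>>> m.session_id = 1
>>> m.devices.session_id
1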
You can do it like this:
class main(object):
    def __init__(self):
        self._session_id = None
        self.devices = self.Devices(self._session_id)

    @property
    def session_id(self):
        return self._session_id

    @session_id.setter
    def session_id(self, value):
        self._session_id = self.devices.session_id = value

    class Devices(object):
        def __init__(self, session_id):
            self.session_id = session_id
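With this version the setter propagates the value to the inner instance, so (assuming the class above):
>>> m = main()
>>> m.session_id = 1
>>> m.devices.session_id
1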

Using base class for all object creation

A senior dev would like me to structure our object-oriented Python code so that all objects are instantiated through the Base class. This does not sit well with me, because the Base class has abstract methods that the derived classes have to implement. His reasoning for using the Base class as the only way to instantiate our objects is that when we iterate through a list of our objects, we can access their variables and methods in the same way. Since each derived object has more attributes instantiated than the Base class, he suggests that the __init__ function take *args and **kwargs as part of its arguments.
Is this a good way to go about doing it? If not, can you help suggest a better alternative?
Here's a simple example of the implementation.
import abc

class Base(metaclass=abc.ABCMeta):
    def __init__(self, reqarg1, reqarg2, **kwargs):
        self.reqarg1 = reqarg1
        self.reqarg2 = reqarg2
        self.optarg1 = kwargs.get("argFromDerivedA", 0.123)
        self.optarg2 = kwargs.get("argFromDerivedB", False)
        self.dict = self.create_dict()

    @abstractmethod
    def create_dict(self):
        pass

    def get_subset_list(self, id):
        return [item for item in self.dict.values() if item.id == id]

    def __iter__(self):
        for item in self.dict.values():
            yield item
        raise StopIteration()
class Derived_A(Base):
    def __init__(self, reqarg1, reqarg2, optarg1):
        super().__init__(reqarg1, reqarg2, optarg1)

    def create_dict(self):
        # some implementation
        return dict


class Derived_B(Base):
    def __init__(self, reqarg1, reqarg2, optarg2):
        super().__init__(reqarg1, reqarg2, optarg2)

    def create_dict(self):
        # some implementation
        return dict
EDIT: Just to make it clear, I don't quite know how to handle the abstractmethod in the base class properly as the senior dev would like to use it as follows:
def main():
    b = Base(100, 200)
    for i in b.get_subset_list(30):
        print(i)
But dict in the Base class is not defined because it is defined in the derived classes and therefore will output the following error:
NameError: name 'abstractmethod' is not defined
My suggestion is that you use a factory class method in the Base class. You would only have to be able to determine the Derived class that you would need to return depending on the supplied input. I'll copy an implementation that assumes that you wanted a Derived_A if you supply the keyword optarg1, and Derived_B if you supply the keyword optarg2. Of course, this is completely artificial and you should change it to suit your needs.
import abc

class Base(metaclass=abc.ABCMeta):
    @classmethod
    def factory(cls, reqarg1, reqarg2, **kwargs):
        if 'optarg1' in kwargs.keys():
            return Derived_A(reqarg1=reqarg1, reqarg2=reqarg2, optarg1=kwargs['optarg1'])
        elif 'optarg2' in kwargs.keys():
            return Derived_B(reqarg1=reqarg1, reqarg2=reqarg2, optarg2=kwargs['optarg2'])
        else:
            raise ValueError('Could not determine Derived class from input')

    def __init__(self, reqarg1, reqarg2, optarg1=0.123, optarg2=False):
        self.reqarg1 = reqarg1
        self.reqarg2 = reqarg2
        self.optarg1 = optarg1
        self.optarg2 = optarg2
        self.dict = self.create_dict()

    @abc.abstractmethod
    def create_dict(self):
        pass

    def get_subset_list(self, id):
        return [item for item in self.dict.values() if item.id == id]

    def __iter__(self):
        for item in self.dict.values():
            yield item


class Derived_A(Base):
    def __init__(self, reqarg1, reqarg2, optarg1):
        super().__init__(reqarg1, reqarg2, optarg1=optarg1)

    def create_dict(self):
        # some implementation
        dict = {'instanceOf': 'Derived_A'}
        return dict


class Derived_B(Base):
    def __init__(self, reqarg1, reqarg2, optarg2):
        super().__init__(reqarg1, reqarg2, optarg2=optarg2)

    def create_dict(self):
        # some implementation
        dict = {'instanceOf': 'Derived_B'}
        return dict
This will allow you to always create a Derived_X class instance that has the non-abstract create_dict method defined by the time __init__ runs.
In [2]: b = Base.factory(100, 200)
ValueError: Could not determine Derived class from input
In [3]: b = Base.factory(100, 200, optarg1=1213.12)
In [4]: print(b.dict)
{'instanceOf': 'Derived_A'}
In [5]: b = Base.factory(100, 200, optarg2=True)
In [6]: print(b.dict)
{'instanceOf': 'Derived_B'}
Moreover, you can have more than one factory method. Look here for a short tutorial.
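For example (a hypothetical sketch with made-up method names, not from the linked tutorial), you could add more explicitly named factory classmethods to Base, next to the generic one:
class Base(metaclass=abc.ABCMeta):
    # ... everything from the factory example above, plus:

    @classmethod
    def from_optarg1(cls, reqarg1, reqarg2, optarg1):
        # explicit constructor for the Derived_A variant
        return Derived_A(reqarg1, reqarg2, optarg1)

    @classmethod
    def from_optarg2(cls, reqarg1, reqarg2, optarg2):
        # explicit constructor for the Derived_B variant
        return Derived_B(reqarg1, reqarg2, optarg2)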
You don't have to use keyword arguments at all; just define the variables with their default values in the parameter list of the function, and send only the parameters you want to send from the derived classes.
Note that parameters with a default value don't have to be supplied - that way you can have a function with a varying number of arguments (where the arguments are unique, and cannot be treated as a list).
Here is a partial example (taken from your code):
import abc

class Base(metaclass=abc.ABCMeta):
    def __init__(self, reqarg1, reqarg2, optarg1=0.123, optarg2=False):
        self.reqarg1, self.reqarg2 = reqarg1, reqarg2
        self.optarg1, self.optarg2 = optarg1, optarg2
    ...

class Derived_A(Base):
    def __init__(self, reqarg1, reqarg2, optarg1):
        super().__init__(reqarg1, reqarg2, optarg1=optarg1)
    ...

class Derived_B(Base):
    def __init__(self, reqarg1, reqarg2, optarg2):
        super().__init__(reqarg1, reqarg2, optarg2=optarg2)
    ...
EDIT: Following the question update, just a small note - the abstract method is there to make sure that every object in a mixed list of derived Base objects can call the same method. The Base object itself cannot call this method - it is abstract in the base class, and is only there so we can make sure every derived class has to implement it.
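As a rough illustration of that point (assuming the factory-based classes from the earlier answer above):
# a mixed list of derived instances; Base itself is never instantiated
objects = [
    Base.factory(100, 200, optarg1=1213.12),  # Derived_A
    Base.factory(100, 200, optarg2=True),     # Derived_B
]

for obj in objects:
    # create_dict is guaranteed to be implemented, so every element has a dict attribute
    print(obj.dict)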

Find item in set via hash

Is there a fast way to find an object of a set, if I know how the hash was calculated?
I have the following class; uid is a unique string (never used twice for different objects):
class Foo():
    def __init__(self, uid):
        self.uid = uid
        self.__hash = hash(self.uid)

    def __hash__(self):
        return self.__hash

    def __eq__(self, other):
        return self.__hash == other.__hash
I have a set created with different uids:
foos = {Foo('a'), Foo('b'), Foo('c')}
I now wonder: if I want the item that was initialized with 'b', is there a faster (and, if possible, more Pythonic) way to get that element out of the set than
b_object = next(foo for foo in foos if foo.uid == 'b')
since I could compute hash_b = hash('b'), which should somehow provide faster access if the set is really huge (which, in my particular case, it obviously is).
I'm not sure what you're using this for, but you could do something like:
uid_to_foo = {foo.uid: foo for foo in foos}
# use 'uid_to_foo[some_foo.uid]' to find an instance fast
And now you get fast access to any Foo instance through its uid.
Note that your current hash does not guarantee the absence of collisions (although they are probably unlikely).
You can even have this in the class itself:
class Foo():
    # add class dictionary mapping uids to foos
    uid_to_foo = {}

    def __init__(self, uid):
        self.uid = uid
        self.__hash = hash(self.uid)
        # add to class-level mapping
        Foo.uid_to_foo[uid] = self

    def __hash__(self):
        return self.__hash

    def __eq__(self, other):
        return self.__hash == other.__hash
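A quick usage sketch, assuming the class above:
>>> foos = {Foo('a'), Foo('b'), Foo('c')}
>>> b_object = Foo.uid_to_foo['b']  # O(1) dict lookup instead of scanning the set
>>> b_object.uid
'b'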
To create a mapping for every sub-class (as asked in the comments), you can do something like the following, using defaultdict:
from collections import defaultdict

class Base():
    # add class dictionary mapping uids to instances
    uid_to_obj = defaultdict(dict)

    def __init__(self, uid):
        self.uid = uid
        self.__hash = hash(self.uid)
        # add specific sub-class mapping for each sub-class
        Base.uid_to_obj[type(self).__name__][uid] = self

    def __hash__(self):
        return self.__hash

    def __eq__(self, other):
        return self.__hash == other.__hash
The class-specific dictionaries are then available under Base.uid_to_obj[type(self).__name__].

How to restrict object creation

Consider the following example:
class Key:
    def __init__(self, s):
        self.s = s


d = {}
for x in range(1, 10000):
    t = Key(x)
    d[t] = x
This will create 10000 keys. Is it possible to control object creation for the Key class so that, for example, no more than 5 objects can be created? The loop should not be changed in any way.
You can control how, or how many objects are created by giving your class a __new__ method:
class Key(object):
    _count = 0

    def __new__(cls, s):
        if cls._count == 5:
            raise TypeError('Too many keys created')
        cls._count += 1
        return super(Key, cls).__new__(cls)

    def __init__(self, s):
        self.s = s
Key.__new__() is called to create a new instance; here I keep a count of how many are created, and if there are too many, an exception is raised. You could also keep a pool of instances in a dictionary, or control creation of new instances in other ways.
Note that this only works for new-style classes, inheriting from object.
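A quick sketch of how this behaves, assuming the Key class above:
>>> keys = [Key(x) for x in range(5)]  # fine: five instances
>>> Key(6)
Traceback (most recent call last):
  ...
TypeError: Too many keys created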
You can also use a metaclass approach
import weakref
import random

class FiveElementType(type):
    def __init__(self, name, bases, d):
        super(FiveElementType, self).__init__(name, bases, d)
        self.__cache = weakref.WeakValueDictionary()

    def __call__(self, *args):
        if len(self.__cache) == 5:
            # return one of the existing instances instead of creating a new one
            return random.choice(list(self.__cache.values()))
        else:
            obj = super(FiveElementType, self).__call__(*args)
            self.__cache[len(self.__cache)] = obj
            return obj


class Key(metaclass=FiveElementType):
    def __init__(self, s):
        self.s = s
You can return a random element or select one based on a stored index. With this approach your loop doesn't fail with an exception, which may or may not be what you want, depending on your intention.
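A rough sketch of the effect, assuming the metaclass version above: the original loop now runs to completion, but only five distinct Key objects are ever created.
>>> d = {}
>>> for x in range(1, 10000):
...     t = Key(x)
...     d[t] = x
...
>>> len(d)  # only five distinct keys exist, so the dict has five entries
5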

Add a decorator to existing builtin class method in python

I've got a class which contains a number of lists, and whenever something is added to one of the lists, I need to trigger a change to the instance's state. I've created a simple class below to demonstrate what I'm trying to do.
Suppose I have a class like this:
class MyClass:
    added = False

    def _decorator(self, f):
        def func(item):
            self.added = True
            return f(item)
        return func

    def __init__(self):
        self.list = [1, 2, 3]
        self.list.append = self._decorator(self.list.append)
Since a list is built in, I cannot change its .append method:
cls = MyClass()  # gives me an AttributeError since '.append' is read-only
Ideally, I could do the following:
cls = MyClass()
cls.list.append(4)
cls.added  # would be True
How should I go about this? Would subclassing list allow me to change its behavior in this way? If so, how would I pass in the class's state without changing the method's signature?
Thanks!
You cannot monkey-patch builtins, so subclassing is the only way (and actually better and cleaner IMHO). I'd go for something like this:
class CustomList(list):
    def __init__(self, parent_instance, *args, **kwargs):
        super(CustomList, self).__init__(*args, **kwargs)
        self.parent_instance = parent_instance

    def append(self, item):
        self.parent_instance.added = True
        super(CustomList, self).append(item)


class MyClass(object):
    added = False

    def __init__(self):
        self.list = CustomList(self, [1, 2, 3])


c = MyClass()
print(c.added)   # False
c.list.append(4)
print(c.added)   # True
Would this suit your needs?
class MyClass(object):
    added = False

    def __init__(self):
        self.list = [1, 2, 3]

    def append(self, obj):
        self.added = True
        self.list.append(obj)


cls = MyClass()
cls.append(4)
cls.added  # True
It might be helpful to know what exactly you're trying to achieve.
