I want to create multiple objects and I want each object to keep track of the order it was created, so first object has id of 1, second id of 2 and so on.
class Something:
    id = 0

    def __init__(self):
        Something.id += 1
Currently I managed to keep track of the instances but not order. So
something1=Something()
something2=Something()
and if I call the id, both return 2
In this case, the reason that both instances report an id of 2 is that you are incrementing the class variable instead of an instance-specific variable.
You can instead make use of both to get proper ids, i.e. the following:
class Something:
    id = 0

    def __init__(self):
        Something.id += 1
        self.id = Something.id
something1=Something()
something2=Something()
print(something1.id, something2.id)
(this prints 1 2). The value of Something.id (the class variable) is also 2 in this case.
Basically what you need is for the class to count the number of instances of itself that are created, which can then be used to set the value of an instance id attribute. The counting itself can be accomplished by applying the built-in next() function to an itertools.count() iterator object.
Also, since you may want to add this capability to multiple classes, implementing the instance counting in a metaclass makes sense, as it can then easily be reused. Using a metaclass also ensures that subclasses (i.e. class SomethingElse(Something):) will each have their own separate instance counter, instead of all sharing the one in the base class as they would with most of the other answers so far.
Here's what I'm suggesting:
from itertools import count

class InstanceCounterMeta(type):
    """Metaclass to maintain an instance count."""
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        cls._instance_count = count(start=1)

class Something(metaclass=InstanceCounterMeta):
    def __init__(self):
        self.id = next(self._instance_count)
something1 = Something()
something2 = Something()
print(something1.id) # -> 1
print(something2.id) # -> 2
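Because the metaclass's __init__ runs once for every class it creates, each subclass automatically gets its own counter. A quick sketch demonstrating this (restating the classes above, plus a hypothetical SomethingElse subclass):

```python
from itertools import count

class InstanceCounterMeta(type):
    """Metaclass that gives every class its own instance counter."""
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        cls._instance_count = count(start=1)

class Something(metaclass=InstanceCounterMeta):
    def __init__(self):
        self.id = next(self._instance_count)

class SomethingElse(Something):
    pass

a = Something()
b = Something()
c = SomethingElse()
print(a.id, b.id, c.id)  # -> 1 2 1: the subclass count restarts at 1
```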
Just create an id member that isn't static:
class Something:
    id = 0

    def __init__(self):
        Something.id += 1
        self.id = Something.id
something1 = Something()
something2 = Something()
print(something1.id) # Prints 1
print(something2.id) # Prints 2
If the goal of Something is more than keeping track of its instances, you can separate this task and implement it independently. The built-in globals() function returns a dictionary mapping identifiers to objects. Because it is a dictionary, insertion order is preserved.
Note that it reflects the state at the moment it is called, so objects that have been deleted will no longer appear in globals().
class A: pass
a1 = A()
a3 = A()
a2 = A()
# amount of instances
len(tuple(obj for obj in globals().values() if isinstance(obj, A)))
# 3
# object identifiers at this state of the program
(tuple(k for k, v in globals().items() if isinstance(v, A)))
# ('a1', 'a3', 'a2')
# delete an object
del a1
# new state
(tuple(k for k, v in globals().items() if isinstance(v, A)))
# ('a3', 'a2')
EDIT - implemented as class method
class A:
    @classmethod
    def instance_counter(cls):
        n = len(tuple(obj for obj in globals().values() if isinstance(obj, cls)))
        objs = ', '.join(k for k, v in globals().items() if isinstance(v, cls))
        print(f'"{n}" instances of {cls.__name__} are in use: "{objs}"')
a1 = A()
a3 = A()
a2 = A()
A.instance_counter()
#"3" instances of A are in use: "a1, a3, a2"
Related
How does Python add the attributes val1 and val2 to the class? Does Python internally invoke something like b1.__shared_state['val1'] = 'Jaga Gola!!!'?
# Borg (monostate pattern) lets a class have as many instances as one likes,
# but ensures that they all share the same state
class Borg:
    __shared_state = {}

    def __init__(self):
        self.__dict__ = self.__shared_state
b1 = Borg()
b2 = Borg()
print(b1 == b2)
b1.val1 = 'Jaga Gola!!!'
b1.val2 = 'BOOOM!!!'
print(b2.val1, b1.val2)
And why, if I delete __shared_state and self.__dict__ = self.__shared_state, does reading the attribute back from the other instance fail with AttributeError: 'Borg' object has no attribute 'val1'?
class Borg:
    def __init__(self):
        pass
b1 = Borg()
b2 = Borg()
print(b1 == b2)
b1.val1 = 'Jaga Gola!!!'
b1.val2 = 'BOOOM!!!'
print(b2.val1, b1.val2)
In this code:
class Borg:
    __shared_state = {}

    def __init__(self):
        self.__dict__ = self.__shared_state
The __shared_state = {} line occurs at class level, so it is added once to the class Borg, not to every individual object of type Borg. It is the same as writing Borg.__shared_state = {} afterwards.
The self.__dict__ = self.__shared_state is confusing because it uses self. twice but has very different effects:
When assigning to self.something, that something is set in the object self. No surprise there.
But when reading from self.something, something is first looked up in the self object, and if it's not found there it's looked up in the object's class. That might sound weird, but you actually use it all the time: that's how methods normally work. For example, in s = "foo"; b = s.startswith("f"), the object s doesn't have an attribute startswith, but its class str does, and that's what is used when you call the method.
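A minimal sketch of that read/write asymmetry:

```python
class C:
    x = 10  # class attribute

obj = C()
print(obj.x)   # -> 10: not found on the instance, so the class attribute is used
obj.x = 99     # writing always sets an attribute on the instance itself
print(obj.x)   # -> 99: the instance attribute now shadows the class one
print(C.x)     # -> 10: the class attribute is untouched
del obj.x
print(obj.x)   # -> 10: lookup falls back to the class again
```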
This line:
b1.val1 = 'Jaga Gola!!!'
Ends up translating to:
b1.__dict__['val1'] = 'Jaga Gola!!!'
But we know that b1.__dict__ is equal to Borg.__shared_state, so it's assigned to that. Then:
print(b2.val1, ...
translates to:
print(b2.__dict__['val1'])
and again we know that b2.__dict__ is equal to the same Borg.__shared_state so val1 is found.
If you remove the stuff about __shared_state at the beginning then b1 and b2 get their own __dict__ objects so putting val1 into the dict of b1 has no effect on b2, and that's how you get the error you mentioned.
This is all fine for playing around with to understand what's happening, but you should realise that this code isn't guaranteed to work and might break e.g. in a future version of Python or another implementation such as PyPy. The Python documentation for __dict__ describes it as a "read-only attribute" so you shouldn't be assigning to it at all. Don't do this in code that anybody else might run!
In fact, the idea that a.foo is just a.__dict__['foo'] is a huge simplification. For a start, we have already seen that reading sometimes falls back to a.__class__.__dict__['foo']. Another example is that a.__dict__ is clearly not a.__dict__['__dict__'], otherwise how would it ever end? The process is somewhat complicated and documented in the Data Model docs.
The supported way to get this behaviour is to use the special __setattr__ and __getattr__ methods (also described in those Data Model docs), like this:
class Borg:
    __shared_state = {}

    def __getattr__(self, name):
        try:
            return Borg.__shared_state[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        Borg.__shared_state[name] = value
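With that definition, the shared-state behaviour from the question still works; a quick sketch (repeating the class so it runs on its own):

```python
class Borg:
    __shared_state = {}

    def __getattr__(self, name):
        try:
            return Borg.__shared_state[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # Every attribute write goes into the single shared dict.
        Borg.__shared_state[name] = value

b1 = Borg()
b2 = Borg()
b1.val1 = 'Jaga Gola!!!'
print(b2.val1)  # the attribute set via b1 is visible on b2
```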
It's an interesting thing you are doing there, and it's based on mutability:
The initial __shared_state that you declared is created before any of your code is executed. That dictionary is known as a class attribute, because it is linked to the class, not to an instance (it is not declared via self). This means that __shared_state is shared between b1 and b2 because it is created before them, and since it is a dict, it is mutable.
What does it mean that it is mutable?
It means that one dictionary assigned to two different instances will reference the same memory address, and even if we change the dictionary, the memory address will remain the same. Here is a demonstration:
class Example:
    __shared_state = {1: 1}

    def __init__(self):
        self.__dict__ = self.__shared_state
        print(self.__shared_state)
ex1 = Example()
ex2 = Example()
print(id(ex1.__dict__), id(ex2.__dict__))
# Prints
# {1: 1}
# {1: 1}
# 140704387518944 140704387518944
Notice how they have the same id? That's because they are referring to the same object, and since the dictionary type is mutable, altering the dictionary through one object changes it for both, because they are the same:
# Executing this
ex1.val1 = 2
# Equals this
ex1.__dict__['val1'] = 2
# Which also equals this
Example.__shared_state['val1'] = 2
This does not happen with integers, which are immutable:
class Example:
    __shared_state = 2

    def __init__(self):
        self.a = self.__shared_state
        print(self.__shared_state)
ex1 = Example()
ex2 = Example()
ex2.a = 3
print(id(ex1.a), id(ex2.a))
# Prints
# 2
# 2
# 9302176 9302208
# Notice that once we change ex2.a, its ID changes!
When you delete your __shared_state, then at the moment you assign b1.val1 = 'Jaga Gola!!!' and b1.val2 = 'BOOOM!!!' the values go only into the dictionary of b1; that's why trying to print b2.val1 and b2.val2 raises an error.
I am creating a class in Python, and I am unsure how to properly set default values. My goal is to set default values for all class instances, which can also be modified by a class method. However, I would like to have the initial default values restored after calling a method.
I have been able to make it work with the code shown below. It isn't very "pretty", so I suspect that are better approaches to this problem.
class plots:
    def __init__(self, **kwargs):
        self.default_attr = {'a': 1, 'b': 2, 'c': 3}
        self.default_attr.update(kwargs)
        self.__dict__.update(self.default_attr)

    def method1(self, **kwargs):
        self.__dict__.update(kwargs)
        # Code for this method goes here
        # Then restore the initial default values
        self.__dict__.update(self.default_attr)
When I use this class, I would do something like my_instance = plots() and my_instance.method1(), my_instance.method1(b = 5), and my_instance.method1(). When calling method1 the third time, b would be 5 if I don't reset the default values at the end of the method definition, but I would like it to be 2 again.
Note: the code above is just an example. The real class has dozens of default values, and using all of them as input arguments would be considered an antipattern.
Any suggestion on how to properly address this issue?
You can use class variables and a property to set default values for all class instances. The instance values can be modified directly, and the initial default values can be restored by calling a method.
In view of the context that "the real class has dozens of default values", another approach that you may consider, is to set up a configuration file containing the default values, and using this file to initialize, or reset the defaults.
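A minimal sketch of that configuration-file idea, assuming a hypothetical defaults.json holding the default values (the file name and layout are made up for illustration):

```python
import json

# Create the hypothetical defaults file for this demo; in real code it
# would already exist alongside the project.
with open('defaults.json', 'w') as f:
    json.dump({'a': 1, 'b': 2, 'c': 3}, f)

class Plots:
    def __init__(self, config_path='defaults.json'):
        self._config_path = config_path
        self.reset_defaults()

    def reset_defaults(self):
        # Re-reading the file restores every default in one go.
        with open(self._config_path) as f:
            self.__dict__.update(json.load(f))

plot = Plots()
plot.b = 5           # temporarily override a default
plot.reset_defaults()
print(plot.b)        # -> 2
```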
Here is a short example of the first approach using one class variable:
class Plots:
    _a = 1

    def __init__(self):
        self._a = None
        self.reset_default_values()

    def reset_default_values(self):
        self._a = Plots._a

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, value):
        self._a = value

plot = Plots()
print(plot.a)
plot.a = 42
print(plot.a)
plot.reset_default_values()
print(plot.a)
output:
1
42
1
There is a whole bunch of ways to solve this problem, but if you have python 3.7 installed (or have 3.6 and install the backport), dataclasses might be a good fit for a nice solution.
First of all, it lets you define the default values in a readable and compact manner, and also allows all the mutation operations you need:
>>> from dataclasses import dataclass
>>> @dataclass
... class Plots:
...     a: int = 1
...     b: int = 2
...     c: int = 3
...
>>> p = Plots()  # create a Plots instance with only default values
>>> p
Plots(a=1, b=2, c=3)
>>> p.a = -1  # update something in this Plots instance
>>> p
Plots(a=-1, b=2, c=3)
You also get the option to define default factories instead of default values for free with the dataclass field definition. It might not be a problem yet, but it avoids the mutable default value gotcha, which every python programmer runs into sooner or later.
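For instance, a mutable default such as a list must be declared with a factory, which also guarantees each instance gets its own fresh copy:

```python
from dataclasses import dataclass, field

@dataclass
class Plots:
    a: int = 1
    tags: list = field(default_factory=list)  # a new list per instance

p1 = Plots()
p2 = Plots()
p1.tags.append('x')
print(p1.tags, p2.tags)  # -> ['x'] []: the lists are not shared
```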
Last but not least, writing a reset function is quite easy given an existing dataclass, because it keeps track of all the default values already in its __dataclass_fields__ attribute:
>>> from dataclasses import dataclass, MISSING
>>> @dataclass
... class Plots:
...     a: int = 1
...     b: int = 2
...     c: int = 3
...
...     def reset(self):
...         for name, field in self.__dataclass_fields__.items():
...             if field.default is not MISSING:
...                 setattr(self, name, field.default)
...             else:
...                 setattr(self, name, field.default_factory())
...
>>> p = Plots(a=-1) # create a Plot with some non-default values
>>> p
Plots(a=-1, b=2, c=3)
>>> p.reset() # calling reset on it restores the pre-defined defaults
>>> p
Plots(a=1, b=2, c=3)
So now you can write some function do_stuff(...) that updates the fields of a Plots instance, and as long as you execute reset() afterwards the changes won't persist.
You can use a context manager or a decorator to apply and reset the values without having to type the same code on each method.
Rather than having self.default_attr, I'd just return to the previous state.
Using a decorator you could get:
def with_kwargs(fn):
    def inner(self, **kwargs):
        prev = self.__dict__.copy()
        try:
            self.__dict__.update(kwargs)
            ret = fn(self)
        finally:
            self.__dict__ = prev
        return ret
    return inner

class plots:
    a = 1
    b = 2
    c = 3

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    @with_kwargs
    def method1(self):
        pass  # Code goes here
IMHO this is a bad idea, and would at least suggest not mutating plots. You can do this by making a new object and passing that to method1 as self.
class Transparent:
    pass

def with_kwargs(fn):
    def inner(self, **kwargs):
        new_self = Transparent()
        new_self.__dict__ = {**self.__dict__, **kwargs}
        return fn(new_self)
    return inner
If I create a class in Python and I give it a class attribute (this is taken directly from the docs, here), as
class Dog:
    tricks = []

    def __init__(self, name):
        self.name = name

    def add_trick(self, trick):
        self.tricks.append(trick)
I see that, as the docs suggest, when doing
d1 = Dog('d1')
d2 = Dog('d2')
d1.add_trick('trick')
print(d2.tricks)
I get that the trick is added to d2 as well:
['trick']
This is because tricks is a class attribute rather than an instance attribute so gets shared across all instances (correct me if this is not orthodox!).
Now, suppose I do this instead
class Dog:
    a = 1

    def __init__(self, name):
        self.name = name

    def improve_a(self):
        self.a += 1
and I run
d1 = Dog('d1')
d2 = Dog('d2')
d1.improve_a()
print(d1.a, d2.a)
this gives me 2 and 1 respectively, namely the count for the second instance has not changed. Why is this, why the behaviour difference?
The int class does not define the += operator (the __iadd__ method); that wouldn't make sense because int is immutable.
That's why += falls back to + followed by =.
self.a += 1 becomes self.a = self.a + 1
Now the first time you call improve_a the following happens:
read class attribute a and put it on the stack
add 1 to the item on the stack
create a new instance attribute a and assign it the value on the stack
That means the class attribute is not changed at all and you add a new instance attribute.
On every subsequent call of improve on the same object the instance attribute is incremented, because attribute lookup starts on the instance dict and will only go to the class dict if that attribute does not exist.
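You can watch the new instance attribute appear by inspecting the dicts directly (a stripped-down version of the class, without name):

```python
class Dog:
    a = 1

    def improve_a(self):
        self.a += 1  # reads Dog.a the first time, then writes an instance attribute

d1 = Dog()
print(d1.__dict__)  # -> {}: no instance attribute yet
d1.improve_a()
print(d1.__dict__)  # -> {'a': 2}: a new instance attribute shadows Dog.a
print(Dog.a)        # -> 1: the class attribute is unchanged
```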
If you do the same with a mutable class which overloads the __iadd__ method you can get different behaviour:
class HasList:
    some_list = []

    def add_something(self, value):
        self.some_list += [value]  # note: must be self.some_list, not bare some_list
fst = HasList()
sec = HasList()
fst.add_something(1)
fst.add_something(2)
sec.add_something(3)
print(HasList.some_list, fst.some_list, sec.some_list)
You will see that all instances and the class itself still hold the same list. The print shows the same list [1, 2, 3] each time. You can also check that all three lists are identical: fst.some_list is sec.some_list and fst.some_list is HasList.some_list # -> True.
That is because list.__iadd__ effectively calls list.extend and then returns the same list object.
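The difference between the two operators is easy to see with a bare list:

```python
x = [1]
y = x
y += [2]       # list.__iadd__ extends the list in place and returns the same object
print(x is y)  # -> True: still one list
print(x)       # -> [1, 2]

y = y + [3]    # plain + builds a brand-new list and rebinds y
print(x is y)  # -> False
print(x)       # -> [1, 2]: the original is untouched
```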
bit of an odd question here. If I have two separate objects, each with their own variables and functions, is there any way those two objects can be combined into one single object?
To be more specific: I have an object with 15 variables in it and then I have my self object. I want to load those variables into self. Is there any easy way to do this or do I have to do it manually?
Use the __dict__ property: self.__dict__.update(other.__dict__)
There are corner cases where this won't work, notably for any variables defined in the class, rather than in a method (or in other "running code").
If you want to copy pretty much everything over:
for k in filter(lambda k: not k.startswith('_'), dir(other)):  # avoid copying private items
    setattr(self, k, getattr(other, k))
vars(obj) returns obj.__dict__, so vars(self).update(vars(obj)) works too.
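In other words, the two spellings are interchangeable:

```python
class Source:
    def __init__(self):
        self.x = 1
        self.y = 2

class Target:
    pass

t = Target()
vars(t).update(vars(Source()))  # identical to t.__dict__.update(Source().__dict__)
print(t.x, t.y)  # -> 1 2
```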
You can create an object which works like a proxy, forwarding attribute and method lookups to the underlying objects. In Python you can use __getattr__() for that:
class A:
    def __init__(self):
        self.a1 = 1
        self.a2 = 2

    def a(self):
        return "a"

class B:
    def __init__(self):
        self.b1 = 1
        self.b2 = 2

    def b(self):
        return "b"

class Combine:
    def __init__(self, *args):
        self.__objects = args

    def __getattr__(self, name):
        for obj in self.__objects:
            try:
                return getattr(obj, name)
            except AttributeError:
                pass
        raise AttributeError(name)

obj = Combine(A(), B())
print(obj.a1, obj.a2, obj.a())
print(obj.b1, obj.b2, obj.b())
The quick-but-ugly (and unsafe) way of copying members from another object at once:
self.__dict__.update(otherobj.__dict__)
this will not copy methods and static (class) members however.
for k, v in other.__dict__.items():
    # You might want to add other conditions here to check which attrs
    # you want to copy and which you don't
    if k not in self.__dict__:
        self.__dict__[k] = v
This is an unusual question, but I'd like to dynamically generate the __slots__ attribute of the class based on whatever attributes I happened to have added to the class.
For example, if I have a class:
class A(object):
    one = 1
    two = 2
    __slots__ = ['one', 'two']
I'd like to do this dynamically rather than specifying the arguments by hand, how would I do this?
At the point you're trying to define slots, the class hasn't been built yet, so you cannot define it dynamically from within the A class.
To get the behaviour you want, use a metaclass to introspect the definition of A and add a __slots__ attribute. (Note that this answer uses Python 2 syntax; in Python 3 you would declare the metaclass as class A(metaclass=MakeSlots) and use list(attrs) instead of attrs.keys().)
class MakeSlots(type):
    def __new__(cls, name, bases, attrs):
        attrs['__slots__'] = attrs.keys()
        return super(MakeSlots, cls).__new__(cls, name, bases, attrs)

class A(object):
    one = 1
    two = 2
    __metaclass__ = MakeSlots
One very important thing to be aware of: if those attributes stay in the class, the __slots__ generation will be useless... okay, maybe not useless, but it will make the class attributes read-only, which is probably not what you want.
The easy way is to say, "Okay, I'll initialize them to None, then let them disappear." Excellent! Here's one way to do that:
class B(object):
    three = None
    four = None
    temp = vars()                   # get the local namespace as a dict()
    __slots__ = temp.keys()         # put their names into __slots__
    __slots__.remove('temp')        # remove non-__slots__ names
    __slots__.remove('__module__')
    for name in __slots__:          # now remove the names from the local
        del temp[name]              # namespace so we don't get read-only
    del temp                        # class attributes, and get rid of temp
If you want to keep those initial values it takes a bit more work... here's one possible solution:
class B(object):
    three = 3
    four = 4

    def __init__(self):
        for key, value in self.__init__.defaults.items():
            setattr(self, key, value)

    temp = vars()
    __slots__ = temp.keys()
    __slots__.remove('temp')
    __slots__.remove('__module__')
    __slots__.remove('__init__')
    __init__.defaults = dict()
    for name in __slots__:
        __init__.defaults[name] = temp[name]
        del temp[name]
    del temp
As you can see, it is possible to do this without a metaclass -- but who wants all that boilerplate? A metaclass could definitely help us clean this up:
class MakeSlots(type):
    def __new__(cls, name, bases, attrs):
        new_attrs = {}
        new_attrs['__slots__'] = slots = attrs.keys()
        slots.remove('__module__')
        slots.remove('__metaclass__')
        new_attrs['__weakref__'] = None
        new_attrs['__init__'] = init = new_init
        init.defaults = dict()
        for name in slots:
            init.defaults[name] = attrs[name]
        return super(MakeSlots, cls).__new__(cls, name, bases, new_attrs)

def new_init(self):
    for key, value in self.__init__.defaults.items():
        setattr(self, key, value)
class A(object):
    __metaclass__ = MakeSlots
    one = 1
    two = 2

class B(object):
    __metaclass__ = MakeSlots
    three = 3
    four = 4
Now all the tediousness is kept in the metaclass, and the actual class is easy to read and (hopefully!) understand.
If you need to have anything else in these classes besides attributes I strongly suggest you put whatever it is in a mixin class -- having them directly in the final class would complicate the metaclass even more.