Unwanted updating of dictionaries in every class instance instead of a single instance's dictionary - python

I tried to read up on the topic, but I cannot figure out a possible solution.
I have a dictionary whose values are instances of this class:
from collections import defaultdict

class flux(object):
    def __init__(self, count_flux=0, ip_c_dict=defaultdict(int), ip_s_dict=defaultdict(int), conn_dict=defaultdict(int)):
        self.count_flux = count_flux
        self.ip_c_dict = ip_c_dict if ip_c_dict is not None else {}
        self.ip_s_dict = ip_s_dict if ip_s_dict is not None else {}
        self.conn_dict = conn_dict if conn_dict is not None else {}
Every time I try to update the dictionary in this way:
dictionary[key].ip_c_dict[some_string] += 1
not only is the dictionary of the current key updated, but so are all the others. And of course it happens with all three dictionaries in the class: ip_c_dict, ip_s_dict and conn_dict.
How can I fix it?

I said in that answer that you shouldn't put dicts in the default arguments, because then the dicts end up shared between all the instances. The defaultdict(int) in the default argument is evaluated only once (when the method is first created) and then all the times the method is called use that same dict as the default.
So put back ip_c_dict=None in the argument list, and below put
self.ip_c_dict = ip_c_dict if ip_c_dict is not None else defaultdict(int)
That way a new defaultdict(int) is created each time, if the ip_c_dict argument is None.
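Putting the advice together, a minimal sketch of the corrected class (keeping the original field names) might look like this:
from collections import defaultdict

class flux(object):
    def __init__(self, count_flux=0, ip_c_dict=None, ip_s_dict=None, conn_dict=None):
        self.count_flux = count_flux
        # Each instance now builds its own fresh defaultdict when no argument is passed,
        # so updating one instance no longer affects the others.
        self.ip_c_dict = ip_c_dict if ip_c_dict is not None else defaultdict(int)
        self.ip_s_dict = ip_s_dict if ip_s_dict is not None else defaultdict(int)
        self.conn_dict = conn_dict if conn_dict is not None else defaultdict(int)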

Related

how to safely append to a dictionary of dictionaries

I know there's a similar question:
How to append to a dictionary of dictionaries
but the answers aren't working for me. My problem is as follows. If I need to add a new key: value pair to a Python dictionary, then
my_dict[key] = value
is always safe (as long as key is a hashable type), whether or not my_dict has already been initialized.
However, if I want my_dict to be a dictionary of dictionaries, then
my_dict[keyA][keyB] = value
doesn't work, unless I already initialized my_dict[keyA] as an empty dictionary. So what I'm doing right now is:
class dict_of_dict():
    def __init__(self):
        self.ddict = {}

    def update(self, keyA, keyB, value):
        if not (keyA in self.ddict.keys()):
            self.ddict[keyA] = {}
        self.ddict[keyA][keyB] = value

a = dict_of_dict()
a.update(0, 3, "foobar")
a.ddict
This works, but I feel like it's overkill. Is there a more compact/Pythonic but still readable solution?
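For reference, the two standard-library idioms discussed at length below each collapse this into a single line (a sketch using the same keys and value as above):
from collections import defaultdict

# Option 1: defaultdict creates the missing inner dict on first access.
a = defaultdict(dict)
a[0][3] = "foobar"

# Option 2: setdefault on a plain dict creates the inner dict only if the key is missing.
b = {}
b.setdefault(0, {})[3] = "foobar"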

How can add() on a set work in such a dictionary? [duplicate]

The addition of collections.defaultdict in Python 2.5 greatly reduced the need for dict's setdefault method. This question is for our collective education:
What is setdefault still useful for, today in Python 2.6/2.7?
What popular use cases of setdefault were superseded with collections.defaultdict?
You could say defaultdict is useful for setting defaults before filling the dict, and setdefault is useful for setting defaults while or after filling the dict.
Probably the most common use case: Grouping items (in unsorted data, else use itertools.groupby)
# really verbose
new = {}
for (key, value) in data:
    if key in new:
        new[key].append(value)
    else:
        new[key] = [value]

# easy with setdefault
new = {}
for (key, value) in data:
    group = new.setdefault(key, [])  # key might exist already
    group.append(value)

# even simpler with defaultdict
from collections import defaultdict
new = defaultdict(list)
for (key, value) in data:
    new[key].append(value)  # all keys have a default already
Sometimes you want to make sure that specific keys exist after creating a dict. defaultdict doesn't work in this case, because it only creates keys on explicit access. Say you use something HTTP-ish with many headers -- some are optional, but you want defaults for them:
headers = parse_headers(msg)  # parse the message, get a dict

# now add all the optional headers
for headername, defaultvalue in optional_headers:
    headers.setdefault(headername, defaultvalue)
I commonly use setdefault for keyword argument dicts, such as in this function:
def notify(self, level, *pargs, **kwargs):
    kwargs.setdefault("persist", level >= DANGER)
    self.__defcon.set(level, **kwargs)
    try:
        kwargs.setdefault("name", self.client.player_entity().name)
    except pytibia.PlayerEntityNotFound:
        pass
    return _notify(level, *pargs, **kwargs)
It's great for tweaking arguments in wrappers around functions that take keyword arguments.
defaultdict is great when the default value is static, like a new list, but not so much if it's dynamic.
For example, I need a dictionary to map strings to unique ints. defaultdict(int) will always use 0 for the default value. Likewise, defaultdict(intGen()) always produces 1.
Instead, I used a regular dict:
nextID = intGen()
myDict = {}
for lots of complicated stuff:
    # stuff that generates unpredictable, possibly already seen str
    strID = myDict.setdefault(myStr, nextID())
Note that dict.get(key, nextID()) is insufficient because I need to be able to refer to these values later as well.
intGen is a tiny class I built that automatically increments an int and returns its value:
class intGen:
    def __init__(self):
        self.i = 0

    def __call__(self):
        self.i += 1
        return self.i
If someone has a way to do this with defaultdict I'd love to see it.
As most answers state, setdefault or defaultdict will let you set a default value when a key doesn't exist. However, I would like to point out a small caveat with regard to the use cases of setdefault. When the Python interpreter executes setdefault, it will always evaluate the second argument to the function, even if the key exists in the dictionary. For example:
In: d = {1:5, 2:6}
In: d
Out: {1: 5, 2: 6}
In: d.setdefault(2, 0)
Out: 6
In: d.setdefault(2, print('test'))
test
Out: 6
As you can see, print was also executed even though 2 already existed in the dictionary. This becomes particularly important if you are planning to use setdefault for an optimization like memoization. If you add a recursive function call as the second argument to setdefault, you won't get any performance gain out of it, as Python will always call the function recursively.
Since memoization was mentioned, a better alternative is to use functools.lru_cache decorator if you consider enhancing a function with memoization. lru_cache handles the caching requirements for a recursive function better.
I use setdefault() when I want a default value in an OrderedDict. There isn't a standard Python collection that does both, but there are ways to implement such a collection.
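A short sketch of what that looks like in practice (the data here is made up):
from collections import OrderedDict

# setdefault supplies per-key defaults while OrderedDict keeps insertion order,
# something neither collection does on its own.
groups = OrderedDict()
for key, value in [("b", 1), ("a", 2), ("b", 3)]:
    groups.setdefault(key, []).append(value)

print(groups)  # b -> [1, 3], a -> [2], in insertion order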
As Muhammad said, there are situations in which you only sometimes wish to set a default value. A great example of this is a data structure which is first populated, then queried.
Consider a trie. When adding a word, if a subnode is needed but not present, it must be created to extend the trie. When querying for the presence of a word, a missing subnode indicates that the word is not present and it should not be created.
A defaultdict cannot do this. Instead, a regular dict with the get and setdefault methods must be used.
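A minimal sketch of that trie pattern with nested plain dicts (the END sentinel is a name invented here for illustration):
END = object()  # hypothetical marker for "a word ends at this node"

def add_word(trie, word):
    node = trie
    for ch in word:
        # setdefault creates missing subnodes only while inserting
        node = node.setdefault(ch, {})
    node[END] = True

def has_word(trie, word):
    node = trie
    for ch in word:
        # get never creates subnodes during a query
        node = node.get(ch)
        if node is None:
            return False
    return END in node

trie = {}
add_word(trie, "cat")
print(has_word(trie, "cat"), has_word(trie, "car"))  # True False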
Theoretically speaking, setdefault would still be handy if you sometimes want to set a default and sometimes not. In real life, I haven't come across such a use case.
However, an interesting use case comes up from the standard library (Python 2.6, _threading_local.py):
>>> mydata = local()
>>> mydata.number = 42
>>> mydata.__dict__
{'number': 42}
>>> mydata.__dict__.setdefault('widgets', [])
[]
>>> mydata.widgets
[]
I would say that using __dict__.setdefault is a pretty useful case.
Edit: As it happens, this is the only example in the standard library, and it is in a comment. So maybe it is not enough of a case to justify the existence of setdefault. Still, here is an explanation:
Objects store their attributes in the __dict__ attribute. As it happens, the __dict__ attribute is writeable at any time after object creation. It is also a dictionary, not a defaultdict. It is not sensible for objects in the general case to have __dict__ as a defaultdict, because that would make each object have all legal identifiers as attributes. So I can't foresee any change to Python objects getting rid of __dict__.setdefault, apart from deleting it altogether if it were deemed not useful.
I rewrote the accepted answer and simplified it for newbies.
# break it down and understand it intuitively
new = {}
for (key, value) in data:
    if key not in new:
        new[key] = []  # this is the core of setdefault: new.setdefault(key, [])
        new[key].append(value)
    else:
        new[key].append(value)

# easy with setdefault
new = {}
for (key, value) in data:
    group = new.setdefault(key, [])  # creates new[key] = [] only if key is missing
    group.append(value)

# even simpler with defaultdict
from collections import defaultdict

new = defaultdict(list)
for (key, value) in data:
    new[key].append(value)  # all keys have a default value of an empty list []
Additionally, I categorized the methods for reference:
dict_methods_11 = {
    'views': ['keys', 'values', 'items'],
    'add': ['update', 'setdefault'],
    'remove': ['pop', 'popitem', 'clear'],
    'retrieve': ['get'],
    'copy': ['copy', 'fromkeys'],
}
One drawback of defaultdict compared to dict (dict.setdefault) is that a defaultdict object creates a new item EVERY TIME a non-existing key is looked up (e.g. when comparing with == or printing). Also, the defaultdict class is generally much less common than the dict class, and in my experience it is more difficult to serialize.
P.S. IMO, functions/methods not meant to mutate an object should not mutate an object.
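A quick illustration of that lookup drawback (a sketch):
from collections import defaultdict

d = defaultdict(list)
if d["missing"] == []:  # merely comparing the value inserts the key
    pass
print(len(d))  # 1 -- the lookup itself created an entry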
Here are some examples of setdefault to show its usefulness:
"""
d = {}
# To add a key->value pair, do the following:
d.setdefault(key, []).append(value)
# To retrieve a list of the values for a key
list_of_values = d[key]
# To remove a key->value pair is still easy, if
# you don't mind leaving empty lists behind when
# the last value for a given key is removed:
d[key].remove(value)
# Despite the empty lists, it's still possible to
# test for the existence of values easily:
if d.has_key(key) and d[key]:
pass # d has some values for key
# Note: Each value can exist multiple times!
"""
e = {}
print e
e.setdefault('Cars', []).append('Toyota')
print e
e.setdefault('Motorcycles', []).append('Yamaha')
print e
e.setdefault('Airplanes', []).append('Boeing')
print e
e.setdefault('Cars', []).append('Honda')
print e
e.setdefault('Cars', []).append('BMW')
print e
e.setdefault('Cars', []).append('Toyota')
print e
# NOTE: now e['Cars'] == ['Toyota', 'Honda', 'BMW', 'Toyota']
e['Cars'].remove('Toyota')
print e
# NOTE: it's still true that ('Toyota' in e['Cars'])
I use setdefault frequently when, get this, setting a default (!!!) in a dictionary; somewhat commonly the os.environ dictionary:
# Set the venv dir if it isn't already overridden:
os.environ.setdefault('VENV_DIR', '/my/default/path')
Less succinctly, this looks like this:
# Set the venv dir if it isn't already overridden:
if 'VENV_DIR' not in os.environ:
    os.environ['VENV_DIR'] = '/my/default/path'
It's worth noting that you can also use the resulting variable:
venv_dir = os.environ.setdefault('VENV_DIR', '/my/default/path')
But that's less necessary than it was before defaultdicts existed.
Another use case that I don't think was mentioned above.
Sometimes you keep a cache dict of objects keyed by their id, where the primary instance lives in the cache and you want to set it when it is missing.
return self.objects_by_id.setdefault(obj.id, obj)
That's useful when you always want to keep a single instance per distinct id no matter how you obtain an obj each time. For example when object attributes get updated in memory and saving to storage is deferred.
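A hedged sketch of that pattern (the class and attribute names are made up for illustration):
import types

class ObjectCache:
    def __init__(self):
        self.objects_by_id = {}

    def canonical(self, obj):
        # Insert obj only if no instance with that id is cached yet;
        # either way, return the single canonical instance.
        return self.objects_by_id.setdefault(obj.id, obj)

cache = ObjectCache()
a = types.SimpleNamespace(id=1, name="first")
b = types.SimpleNamespace(id=1, name="second")
assert cache.canonical(a) is a
assert cache.canonical(b) is a  # the first instance stays canonical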
One very important use-case I just stumbled across: dict.setdefault() is great for multi-threaded code when you only want a single canonical object (as opposed to multiple objects that happen to be equal).
For example, the (Int)Flag Enum in Python 3.6.0 has a bug: if multiple threads are competing for a composite (Int)Flag member, there may end up being more than one:
from enum import IntFlag, auto
import threading

class TestFlag(IntFlag):
    one = auto()
    two = auto()
    three = auto()
    four = auto()
    five = auto()
    six = auto()
    seven = auto()
    eight = auto()

    def __eq__(self, other):
        return self is other

    def __hash__(self):
        return hash(self.value)

seen = set()

class cycle_enum(threading.Thread):
    def run(self):
        for i in range(256):
            seen.add(TestFlag(i))

threads = []
for i in range(8):
    threads.append(cycle_enum())

for t in threads:
    t.start()

for t in threads:
    t.join()

len(seen)
# 272 (should be 256)
The solution is to use setdefault() as the last step of saving the computed composite member -- if another has already been saved then it is used instead of the new one, guaranteeing unique Enum members.
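Outside the Enum internals, the same canonical-object idea looks roughly like this (a sketch with invented names):
_members = {}  # hypothetical cache of composite members, keyed by value

def get_member(value, factory):
    # Several threads may build a candidate concurrently, but setdefault
    # guarantees that only one candidate is stored, and every caller
    # receives that same stored object.
    candidate = factory(value)
    return _members.setdefault(value, candidate)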
In addition to what has been suggested, setdefault might be useful in situations where you don't want to modify a value that has already been set. For example, when you have duplicate numbers and you want to treat them as one group. In this case, if you encounter a repeated key that has already been set, you won't update the value of that key; you will keep the first encountered value, as if you were iterating/updating each repeated key only once.
Here's a code example of recording the index for the keys/elements of a sorted list:
nums = [2, 2, 2, 2, 2]
d = {}
for idx, num in enumerate(sorted(nums)):
    # Plain assignment would be overwritten by the index of the last repeated key:
    # d[num] = idx  # Result (sorted_indices): [4, 4, 4, 4, 4]
    # With setdefault, repeated keys don't update the entry;
    # only the first encountered key's index is kept:
    d.setdefault(num, idx)  # Result (sorted_indices): [0, 0, 0, 0, 0]
sorted_indices = [d[i] for i in nums]
[Edit] Very wrong! The setdefault would always trigger long_computation, Python being eager.
Expanding on Tuttle's answer. For me the best use case is cache mechanism. Instead of:
if x not in memo:
    memo[x] = long_computation(x)
return memo[x]
which consumes three lines and two or three lookups, I would happily write:
return memo.setdefault(x, long_computation(x))
I like the answer given here:
http://stupidpythonideas.blogspot.com/2013/08/defaultdict-vs-setdefault.html
In short, the decision (in non-performance-critical apps) should be made on the basis of how you want to handle lookup of empty keys downstream (viz. KeyError versus default value).
A different use case for setdefault() is when you don't want to overwrite the value of an already set key. Plain assignment (as you would use with a defaultdict) overwrites, while setdefault() does not. For nested dictionaries it is more often the case that you want to set a default only if the key is not set yet, because you don't want to remove the present sub-dictionary. This is when you use setdefault().
Example with defaultdict:
>>> from collections import defaultdict
>>> foo = defaultdict()
>>> foo['a'] = 4
>>> foo['a'] = 2
>>> print(foo)
defaultdict(None, {'a': 2})
setdefault doesn't overwrite:
>>> bar = dict()
>>> bar.setdefault('a', 4)
4
>>> bar.setdefault('a', 2)
4
>>> print(bar)
{'a': 4}
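And the nested-dictionary case mentioned above, where setdefault keeps an existing sub-dictionary intact (a sketch):
settings = {'server': {'host': 'localhost'}}

# Creates the sub-dict only if 'server' is missing; the existing one is kept.
settings.setdefault('server', {})['port'] = 8080
print(settings)  # {'server': {'host': 'localhost', 'port': 8080}}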
Another use case for setdefault in CPython is that it is atomic in all cases, whereas defaultdict will not be atomic if you use a default value created from a lambda.
import threading

cache = {}

def get_user_roles(user_id):
    if user_id in cache:
        return cache[user_id]['roles']

    cache.setdefault(user_id, {'lock': threading.Lock()})
    with cache[user_id]['lock']:
        roles = query_roles_from_database(user_id)
        cache[user_id]['roles'] = roles
If two threads execute cache.setdefault at the same time, only one of them will be able to create the default value.
If instead you used a defaultdict:
cache = defaultdict(lambda: {'lock': threading.Lock()})
This would result in a race condition. In my example above, the first thread could create a default lock, and the second thread could create another default lock, and then each thread could lock its own default lock, instead of the desired outcome of each thread attempting to lock a single lock.
Conceptually, setdefault basically behaves like this (defaultdict also behaves like this if you use an empty list, empty dict, int, or other default value that is not user python code like a lambda):
gil = threading.Lock()

def setdefault(d, key, value_func):
    with gil:
        # the check and the insertion happen under a single lock acquisition
        if key in d:
            return d[key]
        value = value_func()
        d[key] = value
        return value
Conceptually, defaultdict basically behaves like this (only when using python code like a lambda - this is not true if you use an empty list):
gil = threading.Lock()

def defaultdict_getitem(d, key, value_func):
    with gil:
        if key in d:
            return d[key]
    # the lock is released while the user-supplied factory (e.g. a lambda) runs,
    # so two threads can both reach this point for the same key
    value = value_func()
    with gil:
        d[key] = value
        return d[key]

Prevent evaluating default function in dictionary.get or dictionary.setdefault for existing keys

I'd like to keep track of key-value pairs I've processed already in a dictionary (or something else if it's better), where the key is some input and the value is the return output of some complex function/calculation. The main purpose is to prevent doing the same process over again if I want to get the value for a key that has been seen before. I've tried using setdefault and get to solve this problem, but the function I call ends up getting executed regardless of whether the key exists in the dictionary.
Sample code:
def complex_function(some_key):
    """
    Complex calculations using some_key
    """
    return some_value

# Get my_key's value in my_dict. If my_key has not been seen yet,
# calculate its value and set it to my_dict[my_key]
my_value = my_dict.setdefault(my_key, complex_function(my_key))
complex_function ends up getting carried out regardless of whether my_key is in my_dict. I've also tried using my_dict.get(my_key, complex_function(my_key)) with the same result. For now, this is my fixed solution:
if my_key not in my_dict:
    my_dict[my_key] = complex_function(my_key)
my_value = my_dict[my_key]
Here are my questions. First, is using a dictionary for this purpose the right approach? Second, am I using setdefault correctly? And third, is my current fix a good solution to the problem? (I end up calling my_dict[my_key] twice if my_key doesn't exist)
So I went ahead and took Vincent's suggestion of using a decorator.
Here's what the new fix looks like:
import functools

@functools.lru_cache(maxsize=16)
def complex_function(some_input):
    """
    Complex calculations using some_input
    """
    return some_value

my_value = complex_function(some_input)
From what I understand so far, lru_cache uses a dictionary to cache the results. The key in this dictionary refers to argument(s) to the decorated function (some_input) and the value refers to the return value of the decorated function (some_value). So, if the function gets called with an argument that's previously been passed before, it would simply return the value referenced in the decorator's dictionary instead of running the function. If the argument hasn't been seen, the function proceeds as normal, and in addition, the decorator creates a new key-value pair in its dictionary.
I set the maxsize to 16 for now as I don't expect some_input to represent more than 10 unique values. One thing to note is that the arguments for the decorated function are required to be non-mutable and hashable, as it uses the arguments as keys for its dictionary.
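A self-contained sketch of that behaviour (the slow function below is made up; only the decorator usage matters):
import functools

@functools.lru_cache(maxsize=16)
def complex_function(some_input):
    print("computing", some_input)  # printed only on a cache miss
    return some_input * 2

complex_function(3)  # computing 3
complex_function(3)  # served from the cache, no recomputation
print(complex_function.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=16, currsize=1)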
original_dict = {"a": "apple", "b": "banana", "c": "cat"}
keys = original_dict.keys()
new_dict = {}
For every key that you access now, run the following command:
new_dict[key] = value
To check whether you have already accessed a key, run the following code:
# if new_key has not been accessed yet
if new_key not in new_dict:
    # read the value of new_key from original_dict and write it to new_dict
    new_dict[new_key] = original_dict[new_key]
I hope this helps.
Your current solution is fine. You are creating slightly more work, but significantly reducing the computational workload when the key is already present.
However, defaultdict is almost what you need here. By modifying it a little bit we can make it work exactly as you want.
from collections import defaultdict

class DefaultKeyDict(defaultdict):
    def __missing__(self, key):
        if self.default_factory is None:
            raise KeyError(key)
        self[key] = value = self.default_factory(key)
        return value

d = DefaultKeyDict(lambda key: key * 2)
assert d[1] == 2
print(d)

Unique constant reference

Let's take as an example the following code:
ALL = "everything"
my_dict = {"random": "values"}

def get_values(keys):
    if keys is None:
        return {}
    if keys is ALL:
        return my_dict
    if not hasattr(keys, '__iter__'):
        keys = [keys]
    return {key: my_dict[key] for key in keys}
The function get_values returns a dict with the given key, or keys if the parameter is an iterable, an empty dictionary if the parameter is None or the whole dictionary if the parameter is the constant ALL.
The problem with this happens when you want to return a key called "everything". Python may use the same reference for ALL and the parameter (since they are equal immutable strings, which CPython may intern), which would make the keys is ALL expression True. The function would therefore return the whole dict, which is not the intended behavior.
It would be possible to assign ALL to an instance object of a class defined specifically for that purpose, or to use the type method to generate an object inline, which would make ALL a unique reference. Both solutions seem a little overkill though.
I could also use a flag in the function declaration (i.e. def get_values(keys, all=False)), but then I can always derive the value of one parameter from the other (if all is True, then keys is None; if keys is not None, then all is False), so it seems overly verbose.
What is your opinion on the previously mentioned techniques, and do you see other possible ways of fixing this?
Don't use a value that could be (without extreme effort) a valid key as the sentinel.
ALL = object()
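With such a sentinel, the identity check in the original function becomes safe. A sketch reusing the question's names (the single-key check is written with isinstance here, since strings are themselves iterable in Python 3):
ALL = object()  # unique sentinel; no caller-supplied key can ever be `is ALL`

my_dict = {"random": "values", "everything": "just an ordinary key"}

def get_values(keys):
    if keys is None:
        return {}
    if keys is ALL:
        return my_dict
    if isinstance(keys, str):  # a single key, including "everything"
        keys = [keys]
    return {key: my_dict[key] for key in keys}

print(get_values(ALL))           # the whole dict
print(get_values("everything"))  # {'everything': 'just an ordinary key'}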
However, it seems much simpler to define the function to take a (possibly empty) sequence of keys.
def get_values(keys=None):
    if keys is None:
        keys = []
    rv = {}
    for key in keys:
        # Keep in mind, this is a reference to
        # an object in my_dict, not a copy. Also,
        # you may want to handle keys not found in my_dict:
        # ignore them, or set rv[key] to None?
        rv[key] = my_dict[key]
    return rv

d1 = get_values()                # Empty dict
d2 = get_values([])              # Explicitly empty dict
d3 = get_values(["foo", "bar"])  # (Sub)set of values
d4 = get_values(my_dict)         # A copy of my_dict
In the last case, we take advantage of the fact that get_values can take any iterable, and an iterator over a dict iterates over its keys.

Python creating dictionary key from a list of items

I wish to use a Python dictionary to keep track of some running tasks. Each of these tasks has a number of attributes which makes it unique, so I'd like to use a function of these attributes to generate the dictionary keys, so that I can find them in the dictionary again by using the same attributes; something like the following:
class Task(object):
    def __init__(self, a, b):
        pass

# Init task dictionary
d = {}

# Define some attributes
attrib_a = 1
attrib_b = 10

# Create a task with these attributes
t = Task(attrib_a, attrib_b)

# Store the task in the dictionary, using a function of the attributes as a key
d[[attrib_a, attrib_b]] = t
Obviously this doesn't work (the list is mutable, and so can't be used as a key ("unhashable type: list")) - so what's the canonical way of generating a unique key from several known attributes?
Use a tuple in place of the list. Tuples are immutable and can be used as dictionary keys:
d[(attrib_a, attrib_b)] = t
The parentheses can be omitted:
d[attrib_a, attrib_b] = t
However, some people seem to dislike this syntax.
Use a tuple
d[(attrib_a, attrib_b)] = t
That should work fine
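Looking a task up again later by the same attributes is then just another tuple index (a small usage sketch with the question's values):
# assuming d[(attrib_a, attrib_b)] = t was run with attrib_a == 1, attrib_b == 10
assert d[(1, 10)] is t
assert (2, 10) not in d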
