Why is my recursion leading to self-referencing dict values? - python

I aim to write a function that splits a budget across options by comparing options based on their benefit/cost ratio and stores them in a list of nested dicts. When multiple options with the same benefit/cost ratio are available, each option shall be pursued separately (as downstream element which in turn may have multiple downstream elements) and reflected as a list of dicts for its upstream dict. There is no limitation as to how many options may occur.
def get_all_allocation_proposals(budget, options, upstream_element=dict()):
    # accepts:
    #   budget to be allocated
    #   list of options
    # returns:
    #   a list of allocation proposals
    # filter options for affordable options and sort by cost benefit ratio
    options = [x for x in options if x['cost'] <= budget]
    options = sorted(options, key=lambda x: (
        x['benefit_cost_ratio'], x['benefit']), reverse=True)
    if len(options) > 0:
        # select the best options
        best_bc_ratio = options[0]['benefit_cost_ratio']
        best_options = [
            x for x in options if x['benefit_cost_ratio'] == best_bc_ratio]
        upstream_element['downstream_elements'] = []
        for current_element in best_options:
            downstream_options = remove_conflicting_options(
                current_element, options)
            downstream_budget = budget - current_element['cost']
            current_element['downstream_budget'] = downstream_budget
            downstream_elements = get_all_allocation_proposals(downstream_budget,
                                                               downstream_options,
                                                               current_element)
            if downstream_elements is not None:
                current_element['downstream_elements'] = downstream_elements
            upstream_element['downstream_elements'].append(current_element)
        return upstream_element
    else:
        return None
In the code above, when the elements are appended, self-referencing dict values are created. Why is that the case, and how can I avoid it? All I want to do is pass all downstream elements up to the first call in the stack.
Is there something fundamentally flawed with my recursion pattern?

I think the issue is probably that you are passing mutable objects into your recursive call. Specifically, downstream_options and current_element are dicts (or lists of dicts), and when you modify them within a given recursion of the function, you also modify them at the level above. In this case that seems to leave you assigning a value in the dict to itself (or some such impossibility; I haven't quite managed to follow the logic through).
A quick solution might be (I'm not sure if this will break your logic) to make a copy of these dicts at each recursion:
from copy import deepcopy
...
downstream_elements = get_all_allocation_proposals(downstream_budget,
                                                   deepcopy(downstream_options),
                                                   deepcopy(current_element))
Additionally, as identified in the comments, you should avoid having a mutable default argument, i.e. upstream_element=dict(). This can produce some very confusing behaviour if you actually use the default (which you don't appear to in your code).
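To make that pitfall concrete, here is a minimal sketch of the usual fix: default to None and create a fresh dict inside the function (the signature mirrors the question's code):
def get_all_allocation_proposals(budget, options, upstream_element=None):
    # a dict() default is created once, at function definition time, and
    # is then shared by every call that relies on the default
    if upstream_element is None:
        upstream_element = {}
    ...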

How can add() on a set work in such a dictionary? [duplicate]

The addition of collections.defaultdict in Python 2.5 greatly reduced the need for dict's setdefault method. This question is for our collective education:
What is setdefault still useful for, today in Python 2.6/2.7?
What popular use cases of setdefault were superseded with collections.defaultdict?
You could say defaultdict is useful for settings defaults before filling the dict and setdefault is useful for setting defaults while or after filling the dict.
Probably the most common use case: Grouping items (in unsorted data, else use itertools.groupby)
# really verbose
new = {}
for (key, value) in data:
    if key in new:
        new[key].append(value)
    else:
        new[key] = [value]

# easy with setdefault
new = {}
for (key, value) in data:
    group = new.setdefault(key, [])  # key might exist already
    group.append(value)

# even simpler with defaultdict
from collections import defaultdict
new = defaultdict(list)
for (key, value) in data:
    new[key].append(value)  # all keys have a default already
Sometimes you want to make sure that specific keys exist after creating a dict. defaultdict doesn't work in this case, because it only creates keys on explicit access. Say you use something HTTP-ish with many headers -- some are optional, but you want defaults for them:
headers = parse_headers(msg)  # parse the message, get a dict
# now add all the optional headers
for headername, defaultvalue in optional_headers:
    headers.setdefault(headername, defaultvalue)
I commonly use setdefault for keyword argument dicts, such as in this function:
def notify(self, level, *pargs, **kwargs):
    kwargs.setdefault("persist", level >= DANGER)
    self.__defcon.set(level, **kwargs)
    try:
        kwargs.setdefault("name", self.client.player_entity().name)
    except pytibia.PlayerEntityNotFound:
        pass
    return _notify(level, *pargs, **kwargs)
It's great for tweaking arguments in wrappers around functions that take keyword arguments.
defaultdict is great when the default value is static, like a new list, but not so much if it's dynamic.
For example, I need a dictionary to map strings to unique ints. defaultdict(int) will always use 0 for the default value. Likewise, defaultdict(intGen()) always produces 1.
Instead, I used a regular dict:
nextID = intGen()
myDict = {}
for myStr in stream_of_strings:
    # "stream_of_strings" stands in for stuff that generates
    # unpredictable, possibly already seen strings
    strID = myDict.setdefault(myStr, nextID())
Note that dict.get(key, nextID()) is insufficient because I need to be able to refer to these values later as well.
intGen is a tiny class I built that automatically increments an int and returns its value:
class intGen:
    def __init__(self):
        self.i = 0

    def __call__(self):
        self.i += 1
        return self.i
If someone has a way to do this with defaultdict I'd love to see it.
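As it happens, a default_factory is invoked once per missing key, so defaultdict(intGen()) would in fact produce 1, 2, 3, ... as new strings appear; itertools.count gives the same effect. A minimal sketch, assuming Python 3 method names:
from collections import defaultdict
from itertools import count

myDict = defaultdict(count(1).__next__)  # the factory runs once per missing key
print(myDict['a'])  # 1
print(myDict['b'])  # 2
print(myDict['a'])  # still 1; existing keys are never touched
This also sidesteps the eager-evaluation caveat discussed below, since the factory only runs when the key is actually missing.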
As most answers state, setdefault or defaultdict lets you set a default value when a key doesn't exist. However, I would like to point out a small caveat with regard to the use cases of setdefault. When the Python interpreter executes setdefault, it will always evaluate the second argument to the function, even if the key exists in the dictionary. For example:
In: d = {1:5, 2:6}
In: d
Out: {1: 5, 2: 6}
In: d.setdefault(2, 0)
Out: 6
In: d.setdefault(2, print('test'))
test
Out: 6
As you can see, print was also executed even though 2 already existed in the dictionary. This becomes particularly important if you are planning to use setdefault, for example, for an optimization like memoization. If you add a recursive function call as the second argument to setdefault, you wouldn't get any performance benefit out of it, as Python would always evaluate the call.
Since memoization was mentioned, a better alternative is to use functools.lru_cache decorator if you consider enhancing a function with memoization. lru_cache handles the caching requirements for a recursive function better.
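For illustration, a minimal memoization sketch with functools.lru_cache (the classic Fibonacci example, not code from this thread):
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # each distinct n is computed once; repeat calls are served from the cache
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # fast, despite the naive double recursion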
I use setdefault() when I want a default value in an OrderedDict. There isn't a standard Python collection that does both, but there are ways to implement such a collection.
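For example, a minimal sketch of grouping into an OrderedDict with setdefault (note that since Python 3.7 a plain dict preserves insertion order as well):
from collections import OrderedDict

groups = OrderedDict()
for key, value in [('b', 1), ('a', 2), ('b', 3)]:
    groups.setdefault(key, []).append(value)
print(groups)  # OrderedDict([('b', [1, 3]), ('a', [2])])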
As Muhammad said, there are situations in which you only sometimes wish to set a default value. A great example of this is a data structure which is first populated, then queried.
Consider a trie. When adding a word, if a subnode is needed but not present, it must be created to extend the trie. When querying for the presence of a word, a missing subnode indicates that the word is not present and it should not be created.
A defaultdict cannot do this. Instead, a regular dict with the get and setdefault methods must be used.
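Here is a minimal trie sketch along those lines (the names END, add_word and has_word are illustrative, not from any library):
END = object()  # sentinel marking the end of a complete word

def add_word(trie, word):
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})  # create missing subnodes while adding
    node[END] = True

def has_word(trie, word):
    node = trie
    for ch in word:
        node = node.get(ch)  # do NOT create subnodes while querying
        if node is None:
            return False
    return END in node

trie = {}
add_word(trie, 'cat')
print(has_word(trie, 'cat'))  # True
print(has_word(trie, 'ca'))   # False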
Theoretically speaking, setdefault would still be handy if you sometimes want to set a default and sometimes not. In real life, I haven't come across such a use case.
However, an interesting use case comes up from the standard library (Python 2.6, _threading_local.py):
>>> mydata = local()
>>> mydata.__dict__
{'number': 42}
>>> mydata.__dict__.setdefault('widgets', [])
[]
>>> mydata.widgets
[]
I would say that using __dict__.setdefault is a pretty useful case.
Edit: As it happens, this is the only example in the standard library and it is in a comment. So maybe it is not enough of a case to justify the existence of setdefault. Still, here is an explanation:
Objects store their attributes in the __dict__ attribute. As it happens, the __dict__ attribute is writeable at any time after the object's creation. It is also a dictionary, not a defaultdict. It is not sensible for objects in the general case to have __dict__ as a defaultdict, because that would make each object appear to have every legal identifier as an attribute. So I can't foresee any change to Python objects getting rid of __dict__.setdefault, apart from deleting it altogether if it was deemed not useful.
I rewrote the accepted answer and simplified it for newbies.
# break it down and understand it intuitively
new = {}
for (key, value) in data:
    if key not in new:
        new[key] = []  # the core of setdefault; equivalent to new.setdefault(key, [])
        new[key].append(value)
    else:
        new[key].append(value)

# easy with setdefault
new = {}
for (key, value) in data:
    group = new.setdefault(key, [])  # returns new[key], first setting it to [] if missing
    group.append(value)

# even simpler with defaultdict
from collections import defaultdict
new = defaultdict(list)
for (key, value) in data:
    new[key].append(value)  # all keys have a default value of empty list []
Additionally, I categorized the methods for reference:
dict_methods_11 = {
    'views': ['keys', 'values', 'items'],
    'add': ['update', 'setdefault'],
    'remove': ['pop', 'popitem', 'clear'],
    'retrieve': ['get'],
    'copy': ['copy', 'fromkeys'],
}
One drawback of defaultdict over dict (dict.setdefault) is that a defaultdict object creates a new item EVERY TIME a missing key is accessed (e.g. even when it is merely read in a comparison or a print). Also, the defaultdict class is generally way less common than the dict class, and it's more difficult to serialize, in my experience.
P.S. IMO, functions/methods not meant to mutate an object should not mutate an object.
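A quick demonstration of that gotcha (a minimal sketch):
from collections import defaultdict

d = defaultdict(list)
if d['missing'] == ['x']:  # merely reading d['missing'] inserts it
    pass
print(d)  # defaultdict(<class 'list'>, {'missing': []})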
Here are some examples of setdefault to show its usefulness:
"""
d = {}
# To add a key->value pair, do the following:
d.setdefault(key, []).append(value)
# To retrieve a list of the values for a key
list_of_values = d[key]
# To remove a key->value pair is still easy, if
# you don't mind leaving empty lists behind when
# the last value for a given key is removed:
d[key].remove(value)
# Despite the empty lists, it's still possible to
# test for the existence of values easily:
if d.has_key(key) and d[key]:
pass # d has some values for key
# Note: Each value can exist multiple times!
"""
e = {}
print e
e.setdefault('Cars', []).append('Toyota')
print e
e.setdefault('Motorcycles', []).append('Yamaha')
print e
e.setdefault('Airplanes', []).append('Boeing')
print e
e.setdefault('Cars', []).append('Honda')
print e
e.setdefault('Cars', []).append('BMW')
print e
e.setdefault('Cars', []).append('Toyota')
print e
# NOTE: now e['Cars'] == ['Toyota', 'Honda', 'BMW', 'Toyota']
e['Cars'].remove('Toyota')
print e
# NOTE: it's still true that ('Toyota' in e['Cars'])
I use setdefault frequently when, get this, setting a default (!!!) in a dictionary; somewhat commonly the os.environ dictionary:
# Set the venv dir if it isn't already overridden:
os.environ.setdefault('VENV_DIR', '/my/default/path')
Less succinctly, this looks like this:
# Set the venv dir if it isn't already overridden:
if 'VENV_DIR' not in os.environ:
    os.environ['VENV_DIR'] = '/my/default/path'
It's worth noting that you can also use the resulting variable:
venv_dir = os.environ.setdefault('VENV_DIR', '/my/default/path')
But that's less necessary than it was before defaultdicts existed.
Another use case that I don't think was mentioned above.
Sometimes you keep a cache dict of objects by their id where primary instance is in the cache and you want to set cache when missing.
return self.objects_by_id.setdefault(obj.id, obj)
That's useful when you always want to keep a single instance per distinct id no matter how you obtain an obj each time. For example when object attributes get updated in memory and saving to storage is deferred.
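A minimal sketch of that interning pattern (the class and attribute names are illustrative):
class Cache:
    def __init__(self):
        self.objects_by_id = {}

    def canonical(self, obj):
        # the first instance registered for a given id wins; later calls
        # with an equal id get the cached instance back
        return self.objects_by_id.setdefault(obj.id, obj)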
One very important use-case I just stumbled across: dict.setdefault() is great for multi-threaded code when you only want a single canonical object (as opposed to multiple objects that happen to be equal).
For example, the (Int)Flag Enum in Python 3.6.0 has a bug: if multiple threads are competing for a composite (Int)Flag member, there may end up being more than one:
from enum import IntFlag, auto
import threading

class TestFlag(IntFlag):
    one = auto()
    two = auto()
    three = auto()
    four = auto()
    five = auto()
    six = auto()
    seven = auto()
    eight = auto()

    def __eq__(self, other):
        return self is other

    def __hash__(self):
        return hash(self.value)

seen = set()

class cycle_enum(threading.Thread):
    def run(self):
        for i in range(256):
            seen.add(TestFlag(i))

threads = []
for i in range(8):
    threads.append(cycle_enum())
for t in threads:
    t.start()
for t in threads:
    t.join()

len(seen)
# 272 (should be 256)
The solution is to use setdefault() as the last step of saving the computed composite member -- if another has already been saved then it is used instead of the new one, guaranteeing unique Enum members.
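The shape of that fix, as a minimal generic sketch (not the actual enum.py source): build a candidate object first, then publish it with setdefault, so that exactly one instance survives any race:
_canonical = {}  # value -> the one true member

def intern_member(value, make_member):
    candidate = make_member(value)  # several threads may get this far at once
    # setdefault is atomic: the first candidate stored wins, every caller
    # receives that same instance, and losing candidates are discarded
    return _canonical.setdefault(value, candidate)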
In addition to what has been suggested, setdefault might be useful in situations where you don't want to modify a value that has already been set. For example, when you have duplicate numbers and you want to treat them as one group: if you encounter a repeated key that has already been set, you won't update its value; you keep the first encountered value, as if you were iterating/updating the repeated keys only once.
Here's a code example of recording the index for the keys/elements of a sorted list:
nums = [2, 2, 2, 2, 2]
d = {}
for idx, num in enumerate(sorted(nums)):
    # d[num] = idx would keep the index of the LAST repeat:
    #   sorted_indices == [4, 4, 4, 4, 4]
    # with setdefault, repeated keys don't update the entry, so only
    # the FIRST encountered index is kept:
    d.setdefault(num, idx)  # sorted_indices == [0, 0, 0, 0, 0]
sorted_indices = [d[i] for i in nums]
[Edit] Very wrong! The setdefault would always trigger long_computation, Python being eager.
Expanding on Tuttle's answer. For me the best use case is cache mechanism. Instead of:
if x not in memo:
    memo[x] = long_computation(x)
return memo[x]
which consumes 3 lines and 2 or 3 lookups, I would happily write:
return memo.setdefault(x, long_computation(x))
I like the answer given here:
http://stupidpythonideas.blogspot.com/2013/08/defaultdict-vs-setdefault.html
In short, the decision (in non-performance-critical apps) should be made on the basis of how you want to handle lookup of empty keys downstream (viz. KeyError versus default value).
A different use case for setdefault() is when you don't want to overwrite the value of an already-set key. Plain assignment (including into a defaultdict) overwrites, while setdefault() does not. For nested dictionaries it is more often the case that you want to set a default only if the key is not set yet, because you don't want to remove the present sub-dictionary. This is when you use setdefault().
Example with defaultdict:
>>> from collections import defaultdict
>>> foo = defaultdict()
>>> foo['a'] = 4
>>> foo['a'] = 2
>>> print(foo)
defaultdict(None, {'a': 2})
setdefault doesn't overwrite:
>>> bar = dict()
>>> bar.setdefault('a', 4)
4
>>> bar.setdefault('a', 2)
4
>>> print(bar)
{'a': 4}
Another use case for setdefault in CPython is that it is atomic in all cases, whereas defaultdict will not be atomic if you use a default value created from a lambda.
import threading

cache = {}

def get_user_roles(user_id):
    if user_id in cache:
        return cache[user_id]['roles']
    # setdefault is atomic: only one thread creates the per-user entry
    cache.setdefault(user_id, {'lock': threading.Lock()})
    with cache[user_id]['lock']:
        roles = query_roles_from_database(user_id)
        cache[user_id]['roles'] = roles
        return roles
If two threads execute cache.setdefault at the same time, only one of them will be able to create the default value.
If instead you used a defaultdict:
cache = defaultdict(lambda: {'lock': threading.Lock()})
This would result in a race condition. In my example above, the first thread could create a default lock, and the second thread could create another default lock, and then each thread could lock its own default lock, instead of the desired outcome of each thread attempting to lock a single lock.
Conceptually, setdefault basically behaves like this (defaultdict also behaves like this if you use an empty list, empty dict, int, or another default value that is not user Python code like a lambda):
gil = threading.Lock()

def setdefault(d, key, value_func):
    # check, compute, and store all happen under a single lock acquisition
    with gil:
        if key in d:
            return d[key]
        value = value_func()
        d[key] = value
        return value
Conceptually, defaultdict basically behaves like this (only when using Python code like a lambda -- this is not true if you use an empty list):
gil = threading.Lock()

def __getitem__(d, key, value_func):
    with gil:
        if key in d:
            return d[key]
    # the lock is released here, so two threads may both compute a value
    value = value_func()
    with gil:
        d[key] = value
        return value

Python: searching through set of objects

I have a set of objects W that have the attributes name and a score. The __hash__() function is based upon the name only, and the __eq__() function is not defined, so it is based upon the __hash__() function.
Now, I want to use the score of the object. Is there a quicker way to reference to an instance than the following script? Given the way a set works, there must be...
tmp_obj = W(name="myname", score=0)
for obj in w_set:
    if obj == tmp_obj:
        break
else:
    pass  # not found: do nothing with obj
# do something with obj.score
You can use the in operator to check for set membership. This is a constant time operation in sets and dictionaries, since they are implemented as hash tables. For lists and tuples in is linear time.
obj = W("myname", 0)
if obj in w_set:
    pass  # do something with obj
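Note that in only tells you whether a matching element exists; it does not hand you the instance actually stored in the set. If you need the stored object's score, the usual trick is a dict keyed by name (a minimal sketch, assuming W as in the question):
w_by_name = {obj.name: obj for obj in w_set}  # build once, reuse for lookups

match = w_by_name.get("myname")
if match is not None:
    print(match.score)  # the stored instance's score, found in O(1)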
You don't say how you set up your object, but why not just use if obj.score == 0?
for obj in w_set:
    if obj.score == 0:
        break
Or perhaps your question is about avoiding the linear search?
If you have a lot of objects and you'll be doing a lot of searches by score, you need to build an index mapping scores to objects. Presumably several objects could have the same score, so we'll build a list for each score (a set would also work):
from collections import defaultdict

score_index = defaultdict(list)
for obj in w_set:
    score_index[obj.score].append(obj)
You can now loop over the list of all objects with score zero without searching:
for obj in score_index[0]:
    pass  # Do something

Is it possible to use the set function on objects based on only one attribute?

I'm creating this type of object:
class start_url_mod:
    link = ""
    id = 0
    data = ""
I'm creating a list of these objects and I want to know if there is some way to delete one of them when I find a duplicate link attribute.
I know the set() function for deleting duplicates in a "simple" list, but is there something very fast and computationally acceptable here?
Use a dict keyed on the attribute. You can preserve order with collections.OrderedDict:
from collections import OrderedDict

# Keep the last copy with a given link
kept_last = OrderedDict((x.link, x) for x in nonuniquelist).values()

# Keep the first copy with a given link (still preserving input order)
kept_first = list(reversed(OrderedDict((x.link, x) for x in reversed(nonuniquelist)).values()))
If order is not important, plain dict via dict comprehensions is significantly faster in Python 2.7 (because OrderedDict is implemented in Python, not C, and because dict comprehensions are optimized more than constructor calls; in Python 3.5 it's implemented in C):
# Keep the last copy with a given link but order not preserved in result
kept_last = {x.link: x for x in nonuniquelist}.values()
# Keep the first copy with a given link but order not preserved in result
kept_first = {x.link: x for x in reversed(nonuniquelist)}.values()
You can use a dictionary with the attribute that you're interested in being the key ...

Python: Iterating through tuples which are related as subclassed tuples

I have the following tuples:
from collections import namedtuple

ReadElement = namedtuple('ReadElement', 'address value size')
LookupElement = namedtuple('LookupElement', ReadElement._fields[0:2] + ('lookups', ReadElement._fields[2]))
and I want to iterate through them as follows:
mytuples = [ReadElement(1, 2, 3), LookupElement(1, 2, 3, 4)]

for address, value, lookups?, size in mytuples:
    if lookups is not None:
        addLookups(lookups)
    print address, value, lookups?, size

def addLookups(*items):
    return sum(items)
How could I iterate through similar tuples using the same piece of code?
I think what I am looking for is a Union type of the two named tuples, so that that union type preserves the order of the tuples in the loop.
From laike9m's post I can see how I could use an isinstance check without having to unpack the tuples in the loop; however, I would like to avoid special-casing the data and just go straight through without any if statements.
If these were objects I could do something like mytuples[0].execute() without having to worry about what type they were, as long as they were subclassed from the same parent and had that method implemented.
It seems that my question may be a variant of the following: Why are Super-class and Sub-class reversed? In the case above I only have two items, one subclass and one superclass, which are very similar to each other and therefore could also be made into a single class.
First, your namedtuple definition is wrong, should be:
LookupElement = namedtuple('LookupElement', ReadElement._fields[0:2] + ('lookups', ReadElement._fields[2]))
Second, you don't need to worry about all that:
>>> for nt in mytuples:
...     print(nt)

ReadElement(address=1, value=2, size=3)
LookupElement(address=1, value=2, lookups=3, size=4)
I'm going to sleep, so maybe I can't answer your further questions. I think the best way is to check whether the field you want exists before using it.
I don't know exactly what you want; here's what I'd do:
mytuples = [ReadElement(1, 2, 3), LookupElement(1, 2, 3, 4)]
for nt in mytuples:
    if 'lookups' in nt._fields:
        print nt.address, nt.value, nt.lookups, nt.size
    else:
        print nt.address, nt.value, nt.size
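If you want to avoid special-casing per type entirely, getattr with a default lets both tuple types flow through one code path (a minimal sketch, assuming Python 3):
for nt in mytuples:
    lookups = getattr(nt, 'lookups', None)  # None for ReadElement, set for LookupElement
    if lookups is not None:
        addLookups(lookups)
    print(nt.address, nt.value, lookups, nt.size)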

Finding partial strings in a list of strings - python

I am trying to check if a user is a member of an Active Directory group, and I have this:
ldap.set_option(ldap.OPT_REFERRALS, 0)
try:
    con = ldap.initialize(LDAP_URL)
    con.simple_bind_s(userid + "@" + ad_settings.AD_DNS_NAME, password)
    ADUser = con.search_ext_s(ad_settings.AD_SEARCH_DN, ldap.SCOPE_SUBTREE,
                              "sAMAccountName=%s" % userid,
                              ad_settings.AD_SEARCH_FIELDS)[0][1]
except ldap.LDAPError:
    return None
ADUser comes back as a dict of lists of strings:
{'givenName': ['xxxxx'],
 'mail': ['xxxxx@example.com'],
 'memberOf': ['CN=group1,OU=Projects,OU=Office,OU=company,DC=domain,DC=com',
              'CN=group2,OU=Projects,OU=Office,OU=company,DC=domain,DC=com',
              'CN=group3,OU=Projects,OU=Office,OU=company,DC=domain,DC=com',
              'CN=group4,OU=Projects,OU=Office,OU=company,DC=domain,DC=com'],
 'sAMAccountName': ['myloginid'],
 'sn': ['Xxxxxxxx']}
Of course in the real world the group names are verbose and of varied structure, and users will belong to tens or hundreds of groups.
If I get the list of groups out as ADUser.get('memberOf')[0], what is the best way to check if any members of a separate list exist in the main list?
For example, the check list would be ['group2', 'group16'] and I want to get a true/false answer as to whether any of the smaller list exist in the main list.
If the format example you give is somewhat reliable, something like:
import re

grps = re.compile(r'CN=(\w+)').findall

def anyof(short_group_list, adu):
    all_groups_of_user = set(g for gs in adu.get('memberOf', ()) for g in grps(gs))
    return sorted(all_groups_of_user.intersection(short_group_list))
where you pass your list such as ['group2', 'group16'] as the first argument, your ADUser dict as the second argument; this returns an alphabetically sorted list (possibly empty, meaning "none") of the groups, among those in short_group_list, to which the user belongs.
It's probably not much faster to return just a bool, but, if you insist, changing the second statement of the function to:
return any(g for g in short_group_list if g in all_groups_of_user)
might possibly save a certain amount of time in the "true" case (since any short-circuits) though I suspect not in the "false" case (where the whole list must be traversed anyway). If you care about the performance issue, best is to benchmark both possibilities on data that's realistic for your use case!
If performance isn't yet good enough (and a bool yes/no is sufficient, as you say), try reversing the looping logic:
def anyof_v2(short_group_list, adu):
    gset = set(short_group_list)
    return any(g for gs in adu.get('memberOf', ()) for g in grps(gs) if g in gset)
any's short-circuit abilities might prove more useful here (at least in the "true" case, again -- because, again, there's no way to give a "false" result without examining ALL the possibilities anyway!-).
You can use set intersection (& operator) once you parse the group list out. For example:
> memberOf = 'CN=group1,OU=Projects,OU=Office,OU=company,DC=domain,DC=com'
> groups = [token.split('=')[1] for token in memberOf.split(',')]
> groups
['group1', 'Projects', 'Office', 'company', 'domain', 'com']
> checklist1 = ['group1', 'group16']
> set(checklist1) & set(groups)
set(['group1'])
> checklist2 = ['group2', 'group16']
> set(checklist2) & set(groups)
set([])
Note that a conditional evaluation on a set works the same as for lists and tuples. True if there are any elements in the set, False otherwise. So, "if set(checklist2) & set(groups): ..." would not execute since the condition evaluates to False in the above example (the opposite is true for the checklist1 test).
Also see:
http://docs.python.org/library/sets.html
