Efficient grouping into dict - python

I have a list of tuples:
[('Player1', 'A', 1, 100),
 ('Player1', 'B', 15, 100),
 ('Player2', 'A', 7, 100),
 ('Player2', 'B', 65, 100),
 ('Global Total', None, 88, 100)]
Which I wish to convert to a dict in the following format:
{
    'Player1': {
        'A': [1, 12.5],
        'B': [15, 18.75],
        'Total': [16, 18.18]
    },
    'Player2': {
        'A': [7, 87.5],
        'B': [65, 81.25],
        'Total': [72, 81.81]
    },
    'Global Total': {
        'A': [8, 100],
        'B': [80, 100]
    }
}
So each Player dict has its local total value and its percentage of the global total: e.g. Player1's 'A' count of 1 is 12.5% of the global 'A' total of 8, and Player1's total of 16 is 18.18% of the global total of 88.
Currently, I do it like this:
fixed_vals = {}
for name, status, qtd, prct in data_set:  # This is the list of tuples var
    if name in fixed_vals:
        fixed_vals[name].update({status: [qtd, prct]})
    else:
        fixed_vals[name] = {status: [qtd, prct]}

fixed_vals['Global Total']['Total'] = fixed_vals['Global Total'].pop(None)

total_a = 0
for k, v in fixed_vals.items():
    if k != 'Global Total':
        total_a += v['A'][0]

fixed_vals['Global Total']['A'] = [
    total_a, total_a * 100 / fixed_vals['Global Total']['Total'][0]
]
fixed_vals['Global Total']['B'] = [
    fixed_vals['Global Total']['Total'][0] - total_a,
    fixed_vals['Global Total']['Total'][0] - fixed_vals['Global Total']['A'][1]
]

for player, vals in fixed_vals.items():
    if player != 'Global Total':
        vals['A'][1] = vals['A'][0] * 100 / fixed_vals['Global Total']['A'][0]
        vals['B'][1] = fixed_vals['Global Total']['A'][1] - vals['B'][1]
The problem is that this is not very flexible, since I would have to do something similar with almost 12 categories (A, B, ...).
Is there a better approach to this? Perhaps this is trivial with pandas?
Edit for clarification:
There are no duplicate categories for each Player; every one of them has the same sequence (some might have 0, but each category is unique).

Everyone seems attracted to a dict-only solution, but why not try converting to pandas?
import pandas as pd
# given
tuple_list = [('Player1', 'A', 1, 100),
              ('Player1', 'B', 15, 100),
              ('Player2', 'A', 7, 100),
              ('Player2', 'B', 65, 100),
              ('Global Total', None, 88, 100)]
# make a dataframe
df = pd.DataFrame(tuple_list, columns=['player', 'game', 'score', 'pct'])
del df['pct']
df = df[df.player != 'Global Total']
df = df.pivot(index='player', columns='game', values='score')
df.columns.name = ''
df.index.name = ''
# just a check
assert df.to_dict() == {'A': {'Player1': 1, 'Player2': 7},
                        'B': {'Player1': 15, 'Player2': 65}}
#           A   B
# player
# Player1   1  15
# Player2   7  65
print('Obtained dataset:\n', df)
Basically, all you need is the 'df' dataframe; the rest you can compute and add later, with no need to save it to a dictionary.
Below is an update at the OP's request:
# the per-column sums - these were the 'Global Total' entries in the dicts
# A     8
# B    80
sum_col = df.sum(axis=0)
# let's calculate the share of each player's score:
shares = df / df.sum(axis=0) * 100
assert shares.transpose().to_dict() == {'Player1': {'A': 12.5, 'B': 18.75},
                                        'Player2': {'A': 87.5, 'B': 81.25}}
# in 'shares' the columns add to 100%:
#               A      B
# player
# Player1   12.50  18.75
# Player2   87.50  81.25
# let's build a dataframe close to the original dictionary structure
mixed_df = pd.concat([df.A, shares.A, df.B, shares.B], axis=1)
totals = mixed_df.sum(axis=0)
totals.name = 'Total'
# DataFrame.append was removed in pandas 2.0; pd.concat is the replacement
mixed_df = pd.concat([mixed_df, totals.to_frame().T])
mixed_df.columns = ['A', 'A_pct', 'B', 'B_pct']
print('\nProducing some statistics\n', mixed_df)
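If you still want the nested dict shape from the question, a minimal sketch (assuming the mixed_df built above) is pandas' own to_dict with orient='index':

# Hedged sketch: one sub-dict per player (and one for the 'Total' row).
nested = mixed_df.to_dict('index')
# e.g. {'Player1': {'A': 1.0, 'A_pct': 12.5, 'B': 15.0, 'B_pct': 18.75}, ...}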

One solution would be to use itertools.groupby to group consecutive scores from the same player:
tup = [('Player1', 'A', 1, 100), ('Player1', 'B', 15, 100),
       ('Player2', 'A', 7, 100), ('Player2', 'B', 65, 100),
       ('Global Total', None, 88, 100)]
Then import groupby:
from itertools import groupby
result = dict((name, dict((x[1], x[2:]) for x in values))
              for name, values in groupby(tup, lambda x: x[0]))
Then just go and update all the totals:
for key in result:
    if key == "Global Total": continue  # skip this one ...
    # sum up our player scores
    result[key]['total'] = [sum(col) for col in zip(*result[key].values())]
# you can print the results too
print(result)
# {'Player2': {'A': (7, 100), 'total': [72, 200], 'B': (65, 100)}, 'Player1': {'A': (1, 100), 'total': [16, 200], 'B': (15, 100)}, 'Global Total': {None: (88, 100)}}
NOTE: this solution REQUIRES that all of Player1's scores are grouped together (consecutive) in your tuple list, all of Player2's scores likewise, and so on.
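If the tuples might not arrive grouped that way, a minimal sketch (assuming the tup list and groupby import from above) is to sort by the name field first:

# Hedged sketch: sorting by name makes each player's rows consecutive,
# which is exactly what groupby needs.
tup_sorted = sorted(tup, key=lambda x: x[0])
result = dict((name, dict((x[1], x[2:]) for x in values))
              for name, values in groupby(tup_sorted, lambda x: x[0]))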

A) Break your code up into manageable chunks:
from collections import defaultdict

result = defaultdict(dict)
for (cat, sub, num, percent) in input_list:
    result[cat][sub] = [num, percent]
Now we have a dict with the player counts, but the only valid percentages are for the totals, and we don't yet have global counts.
from collections import Counter

def build_global(dct):
    keys = Counter()
    for key in dct:
        if key == "Global Total":
            continue
        for sub_key in dct[key]:
            keys[sub_key] += dct[key][sub_key][0]
    for key in keys:
        dct["Global Total"][key] = [keys[key], 100]
build_global(result) now yields valid global counts for each event.
Finally:
def calc_percent(dct):
    totals = dct["Global Total"]
    for key in dct:
        local_total = 0
        if key == "Global Total":
            continue
        for sub_key in dct[key]:
            local_total += dct[key][sub_key][0]
            dct[key][sub_key][1] = (dct[key][sub_key][0] / float(totals[sub_key][0])) * 100
        dct[key]['Total'] = [local_total, (local_total / float(dct['Global Total'][None][0])) * 100]
calc_percent(result) goes through and builds the percentages.
result is then:
defaultdict(<type 'dict'>,
            {'Player2': {'A': [7, 87.5], 'B': [65, 81.25], 'Total': [72, 81.81818181818183]},
             'Player1': {'A': [1, 12.5], 'B': [15, 18.75], 'Total': [16, 18.181818181818183]},
             'Global Total': {'A': [8, 100], None: [88, 100], 'B': [80, 100]}})
If you need it exactly as specified, you can delete the None entry in 'Global Total' and call dict(result) to convert the defaultdict into a vanilla dict, as sketched below.
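A minimal sketch of that cleanup, assuming the result built above:

del result["Global Total"][None]  # drop the raw None entry
result = dict(result)             # plain dict instead of defaultdict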

Using a remapping tool from more_itertools in Python 3.6+:
Given
import copy as cp
import collections as ct
import more_itertools as mit
data = [
    ("Player1", "A", 1, 100),
    ("Player1", "B", 15, 100),
    ("Player2", "A", 7, 100),
    ("Player2", "B", 65, 100),
    ("Global Total", None, 88, 100)
]

# Discard the last entry
data = data[:-1]

# Key functions
kfunc = lambda tup: tup[0]
vfunc = lambda tup: tup[1:]
rfunc = lambda x: {item[0]: [item[1]] for item in x}
Code
# Step 1
remapped = mit.map_reduce(data, kfunc, vfunc, rfunc)
# Step 2
intermediate = ct.defaultdict(list)
for d in remapped.values():
    for k, v in d.items():
        intermediate[k].extend(v)

# Step 3
remapped["Global Total"] = {k: [sum(v)] for k, v in intermediate.items()}
final = cp.deepcopy(remapped)
for name, d in remapped.items():
    for lbl, v in d.items():
        stat = (v[0] / remapped["Global Total"][lbl][0]) * 100
        final[name][lbl].append(stat)
Details
Step 1 - build a new dict of remapped groups.
This is done by defining key functions that dictate how to process the keys and values. The reducing function processes the values into sub-dictionaries. See also docs for more details on more_itertools.map_reduce.
>>> remapped
defaultdict(None,
            {'Player1': {'A': [1], 'B': [15]},
             'Player2': {'A': [7], 'B': [65]}})
Step 2 - build an intermediate dict for lookups
>>> intermediate
defaultdict(list, {'A': [1, 7], 'B': [15, 65]})
Step 3 - build a final dict from the latter dictionaries
>>> final
defaultdict(None,
            {'Player1': {'A': [1, 12.5], 'B': [15, 18.75]},
             'Player2': {'A': [7, 87.5], 'B': [65, 81.25]},
             'Global Total': {'A': [8, 100.0], 'B': [80, 100.0]}})
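This answer leaves out the per-player 'Total' entries from the question; a hedged sketch to add them, assuming the final dict built above:

# Hypothetical follow-up step, not part of the original answer.
grand = sum(v[0] for v in final["Global Total"].values())  # 88
for name, d in final.items():
    if name == "Global Total":
        continue
    local = sum(v[0] for v in d.values())
    d["Total"] = [local, local / grand * 100]
# final['Player1']['Total'] == [16, 18.181818181818183]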

Related

Find All Permutations of a List of Dicts

So I have a list of dicts containing letters and their frequencies.
letter_freq = [
    {'a': 10, 'b': 7},
    {'d': 15, 'g': 8},
    {'a': 12, 'q': 2}
]
I want to find all possible combinations of these dictionaries, as well as the total of their values:
perms = {
    'ada': 37, 'adq': 27, 'aga': 30, 'agq': 20,
    'bda': 34, 'bdq': 24, 'bga': 27, 'bgq': 17
}
I've looked at itertools.product(), but I don't see how to apply it to this specific use case. My intuition is that the easiest way to implement this is a recursive function, but I'm struggling to see how to add the values, build the strings for the keys, and make it all work.
Also, this list and the dicts can be of any length. Is there a simple way to do this that I haven't found yet? Thank you!
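For reference, a hedged sketch of the recursive approach the question hints at (perms_recursive is a hypothetical name; the itertools.product answers below are simpler and faster):

# Recursive sketch: combine the first dict with all permutations of the rest.
def perms_recursive(dicts):
    if not dicts:
        return {'': 0}
    rest = perms_recursive(dicts[1:])
    return {k + k_rest: v + v_rest
            for k, v in dicts[0].items()
            for k_rest, v_rest in rest.items()}

assert perms_recursive(letter_freq)['ada'] == 37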
Solutions and Benchmark:
Yes, itertools.product works:
from itertools import product
perms = {
    ''.join(keys): sum(vals)
    for prod in product(*map(dict.items, letter_freq))
    for keys, vals in [zip(*prod)]
}
Alternatively, build the products for the keys and the values separately, so we don't have to separate them:
perms = {
    ''.join(keys): sum(vals)
    for keys, vals in zip(product(*letter_freq),
                          product(*map(dict.values, letter_freq)))
}
Or fully separate their constructions (my favorite one):
keys = map(''.join, product(*letter_freq))
vals = map(sum, product(*map(dict.values, letter_freq)))
perms = dict(zip(keys, vals))
A benchmark would be interesting; I suspect my last one will be the fastest of these, and also faster than Samwise's.
Yet another, inspired by a glance at constantstranger's (but much faster than theirs in an initial benchmark):
items = [('', 0)]
for d in letter_freq:
    items = [(k0 + k, v0 + v)
             for k, v in d.items()
             for k0, v0 in items]
perms = dict(items)
Benchmark:
With your example list of dicts:
6.6 μs perms1
4.5 μs perms2
4.1 μs perms3
4.0 μs perms4
11.0 μs perms_Samwise
12.7 μs perms_constantstranger
With a list of seven dicts with four items each:
15.5 ms perms1
7.6 ms perms2
5.5 ms perms3
4.8 ms perms4
27.2 ms perms_Samwise
42.2 ms perms_constantstranger
Code (Try it online!):
def perms1(letter_freq):
    return {
        ''.join(keys): sum(vals)
        for prod in product(*map(dict.items, letter_freq))
        for keys, vals in [zip(*prod)]
    }

def perms2(letter_freq):
    return {
        ''.join(keys): sum(vals)
        for keys, vals in zip(product(*letter_freq),
                              product(*map(dict.values, letter_freq)))
    }

def perms3(letter_freq):
    keys = map(''.join, product(*letter_freq))
    vals = map(sum, product(*map(dict.values, letter_freq)))
    return dict(zip(keys, vals))

def perms4(letter_freq):
    items = [('', 0)]
    for d in letter_freq:
        items = [(k0 + k, v0 + v)
                 for k, v in d.items()
                 for k0, v0 in items]
    return dict(items)

def perms_Samwise(letter_freq):
    return {''.join(k for k, _ in p): sum(v for _, v in p)
            for p in itertools.product(*(d.items() for d in letter_freq))}

def perms_constantstranger(letter_freq):
    stack = [['', 0]]
    [stack.append((stack[i][0] + k, stack[i][1] + v)) for row in letter_freq if (lenStack := len(stack)) for k, v in row.items() for i in range(lenStack)]
    return dict(row for row in stack if len(row[0]) == len(letter_freq))
funcs = perms1, perms2, perms3, perms4, perms_Samwise, perms_constantstranger

letter_freq = [
    {'a': 10, 'b': 7, 'c': 5, 'd': 2},
    {'d': 15, 'g': 8, 'j': 6, 'm': 3},
    {'a': 12, 'q': 2, 'x': 1, 'z': 4},
    {'a': 10, 'b': 7, 'c': 5, 'd': 2},
    {'d': 15, 'g': 8, 'j': 6, 'm': 3},
    {'a': 12, 'q': 2, 'x': 1, 'z': 4},
    {'a': 10, 'b': 7, 'c': 5, 'd': 2},
]

from timeit import repeat
import itertools
from itertools import product

expect = funcs[0](letter_freq)
for func in funcs:
    result = func(letter_freq)
    assert result == expect

for _ in range(3):
    for func in funcs:
        t = min(repeat(lambda: func(letter_freq), number=1))
        print('%5.1f ms ' % (t * 1e3), func.__name__)
    print()
itertools.product is indeed what you want.
>>> letter_freq = [
...     {'a': 10, 'b': 7},
...     {'d': 15, 'g': 8},
...     {'a': 12, 'q': 2}
... ]
>>> import itertools
>>> {''.join(k for k, _ in p): sum(v for _, v in p) for p in itertools.product(*(d.items() for d in letter_freq))}
{'ada': 37, 'adq': 27, 'aga': 30, 'agq': 20, 'bda': 34, 'bdq': 24, 'bga': 27, 'bgq': 17}
If for any reason you wanted to roll your own permutations using comprehensions instead of product() and map(), you could do it this way:
letter_freq = [
    {'a': 10, 'b': 7},
    {'d': 15, 'g': 8},
    {'a': 12, 'q': 2}
]

stack = [['', 0]]
[stack.append((stack[i][0] + k, stack[i][1] + v)) for row in letter_freq if (lenStack := len(stack)) for k, v in row.items() for i in range(lenStack)]
perms = dict(row for row in stack if len(row[0]) == len(letter_freq))
print(perms)
Output:
{'ada': 37, 'bda': 34, 'aga': 30, 'bga': 27, 'adq': 27, 'bdq': 24, 'agq': 20, 'bgq': 17}

Python - group/merge dictionaries based on key/values identity

I have a list containing many dictionaries with same keys but different values.
What I would like to do is to group/merge dictionaries based on the values of some of the keys.
It's probably faster to show an example rather than trying to explain:
[{'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 3, 'C2': 15},
 {'zone': 'B', 'weekday': 2, 'hour': 6, 'C1': 5, 'C2': 27},
 {'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 7, 'C2': 12},
 {'zone': 'C', 'weekday': 5, 'hour': 8, 'C1': 2, 'C2': 13}]
So, what I want to achieve is merging the first and third dictionary, since they have the same "zone", "hour" and "weekday", summing the values in C1 and C2:
[{'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 10, 'C2': 27},
 {'zone': 'B', 'weekday': 2, 'hour': 6, 'C1': 5, 'C2': 27},
 {'zone': 'C', 'weekday': 5, 'hour': 8, 'C1': 2, 'C2': 13}]
Any help here? :) I've been struggling with this for a couple of days; I've got a bad, unscalable solution, but I'm sure there is something far more Pythonic I could put in place.
Thanks!
Sort, then group by the relevant keys; iterate over the groups and create new dictionaries with summed values.
import operator
import itertools

keys = operator.itemgetter('zone', 'weekday', 'hour')
c1_c2 = operator.itemgetter('C1', 'C2')

# data is your list of dicts
data.sort(key=keys)
grouped = itertools.groupby(data, keys)

new_data = []
for (zone, weekday, hour), g in grouped:
    c1, c2 = 0, 0
    for d in g:
        c1 += d['C1']
        c2 += d['C2']
    new_data.append({'zone': zone, 'weekday': weekday,
                     'hour': hour, 'C1': c1, 'C2': c2})
That last loop could also be written as:
for (zone, weekday, hour), g in grouped:
    cees = map(c1_c2, g)
    c1, c2 = map(sum, zip(*cees))
    new_data.append({'zone': zone, 'weekday': weekday,
                     'hour': hour, 'C1': c1, 'C2': c2})
By using a defaultdict you can merge them in linear time.
from collections import defaultdict

res = defaultdict(lambda: defaultdict(int))
for d in dictionaries:
    res[(d['zone'], d['weekday'], d['hour'])]['C1'] += d['C1']
    res[(d['zone'], d['weekday'], d['hour'])]['C2'] += d['C2']
The drawback is that you need another pass to produce the output in the format you defined, as sketched below.
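A minimal sketch of that second pass, assuming the res defaultdict built above:

# Flatten the (zone, weekday, hour) keys back into a list of dicts.
merged = [
    {'zone': zone, 'weekday': weekday, 'hour': hour, **counts}
    for (zone, weekday, hour), counts in res.items()
]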
I've gone ahead and written a slightly longer solution, making use of namedtuples as keys of the dictionary:
from collections import namedtuple
zones = [{'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 3, 'C2': 15},
         {'zone': 'B', 'weekday': 2, 'hour': 6, 'C1': 5, 'C2': 27},
         {'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 7, 'C2': 12},
         {'zone': 'C', 'weekday': 5, 'hour': 8, 'C1': 2, 'C2': 13}]
ZoneTime = namedtuple("ZoneTime", ["zone", "weekday", "hour"])
results = dict()
for zone in zones:
    zone_time = ZoneTime(zone['zone'], zone['weekday'], zone['hour'])
    if zone_time in results:
        results[zone_time]['C1'] += zone['C1']
        results[zone_time]['C2'] += zone['C2']
    else:
        results[zone_time] = {'C1': zone['C1'], 'C2': zone['C2']}
print(results)
This uses a namedtuple of (zone, weekday, hour) as the key to each dictionary. Then it's fairly trivial to either add to it if it already exists within results, or create a new entry in the dictionary.
You can definitely make this shorter and "smarter", but it may become less readable; one possible shortening is sketched below.
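A hedged sketch of one such shortening, using a Counter per key so the if/else branch disappears (my illustration, not part of the original answer; the values become Counter objects, which behave like dicts):

from collections import Counter, defaultdict, namedtuple

ZoneTime = namedtuple("ZoneTime", ["zone", "weekday", "hour"])
results = defaultdict(Counter)
for zone in zones:
    key = ZoneTime(zone['zone'], zone['weekday'], zone['hour'])
    results[key].update(C1=zone['C1'], C2=zone['C2'])  # Counter adds the counts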
Edit: Run Time Comparison
My original answer (see below) was not a good one, but I think I had a useful contribution by doing a little bit of run time analysis on the other answers so I've edited that portion and put it at the top. Here I include the three other solutions, along with the required transformations to produce the desired output. For completeness I also include a version using pandas, which assumes that the user is working with a DataFrame (transforming from list of dicts to data frame and back was not even close to worth it). Comparison times vary a little depending on the random data generated, but these are fairly representative:
>>> run_timer(100)
Times with 100 values
...with defaultdict: 0.1496697600000516
...with namedtuple: 0.14976404899994122
...with groupby: 0.0690777249999428
...with pandas: 3.3165711250001095
>>> run_timer(1000)
Times with 1000 values
...with defaultdict: 1.267153091999944
...with namedtuple: 0.9605341750000207
...with groupby: 0.6634409229998255
...with pandas: 3.5146895360001054
>>> run_timer(10000)
Times with 10000 values
...with defaultdict: 9.194478484000001
...with namedtuple: 9.157486462000179
...with groupby: 5.18553969300001
...with pandas: 4.704001281000046
>>> run_timer(100000)
Times with 100000 values
...with defaultdict: 59.644778522000024
...with namedtuple: 89.26688319799996
...with groupby: 93.3517027989999
...with pandas: 14.495209061999958
Takeaways:
working with pandas data frames pays off big time for large datasets
NOTE: I do not include conversion between list of dicts and data frame, which is definitely significant
otherwise the accepted solution (by wwii) wins for small to medium datasets, but for very large ones it may be the slowest
changing the sizes of the groups (e.g., by decreasing the number of zones) has a huge effect which is not examined here
Here is the script I used to generate the above.
import random
import pandas
from timeit import timeit
from functools import partial
from itertools import groupby
from operator import itemgetter
from collections import namedtuple, defaultdict
def with_pandas(df):
    return df.groupby(['zone', 'weekday', 'hour']).agg(sum).reset_index()
def with_groupby(data):
    keys = itemgetter('zone', 'weekday', 'hour')
    # data is your list of dicts
    data.sort(key=keys)
    grouped = groupby(data, keys)
    new_data = []
    for (zone, weekday, hour), g in grouped:
        c1, c2 = 0, 0
        for d in g:
            c1 += d['C1']
            c2 += d['C2']
        new_data.append({'zone': zone, 'weekday': weekday,
                         'hour': hour, 'C1': c1, 'C2': c2})
    return new_data
def with_namedtuple(zones):
    ZoneTime = namedtuple("ZoneTime", ["zone", "weekday", "hour"])
    results = dict()
    for zone in zones:
        zone_time = ZoneTime(zone['zone'], zone['weekday'], zone['hour'])
        if zone_time in results:
            results[zone_time]['C1'] += zone['C1']
            results[zone_time]['C2'] += zone['C2']
        else:
            results[zone_time] = {'C1': zone['C1'], 'C2': zone['C2']}
    return [
        {
            'zone': key[0],
            'weekday': key[1],
            'hour': key[2],
            **val
        }
        for key, val in results.items()
    ]
def with_defaultdict(dictionaries):
    res = defaultdict(lambda: defaultdict(int))
    for d in dictionaries:
        res[(d['zone'], d['weekday'], d['hour'])]['C1'] += d['C1']
        res[(d['zone'], d['weekday'], d['hour'])]['C2'] += d['C2']
    return [
        {
            'zone': key[0],
            'weekday': key[1],
            'hour': key[2],
            **val
        }
        for key, val in res.items()
    ]
def gen_random_vals(num):
    return [
        {
            'zone': random.choice('ABCDEFGHIJKLMNOPQRSTUVWXYZ'),
            'weekday': random.randint(1, 7),
            'hour': random.randint(0, 23),
            'C1': random.randint(1, 50),
            'C2': random.randint(1, 50),
        }
        for idx in range(num)
    ]
def run_timer(num_vals=1000, timeit_num=1000):
    vals = gen_random_vals(num_vals)
    df = pandas.DataFrame(vals)
    p_fmt = "\t...with %s: %s"
    times = {
        'defaultdict': timeit(stmt=partial(with_defaultdict, vals), number=timeit_num),
        'namedtuple': timeit(stmt=partial(with_namedtuple, vals), number=timeit_num),
        'groupby': timeit(stmt=partial(with_groupby, vals), number=timeit_num),
        'pandas': timeit(stmt=partial(with_pandas, df), number=timeit_num),
    }
    print("Times with %d values" % num_vals)
    for key, val in times.items():
        print(p_fmt % (key, val))
where
with_groupby uses the solution by wwii
with_namedtuple uses the solution by Jose Salvatierra
with_defaultdict uses the solution by abc
with_pandas uses the solution proposed by Alexander Cécile in comments
assumes data is already in a DataFrame and produces a DataFrame as result
Original answer:
Just for fun, here's a completely different approach using groupby. Granted, it's not the prettiest, but it should be fairly quick.
from itertools import groupby
from operator import itemgetter
from pprint import pprint

vals = [
    {'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 3, 'C2': 15},
    {'zone': 'B', 'weekday': 2, 'hour': 6, 'C1': 5, 'C2': 27},
    {'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 7, 'C2': 12},
    {'zone': 'C', 'weekday': 5, 'hour': 8, 'C1': 2, 'C2': 13}
]

ordered = sorted(
    [
        (
            (row['zone'], row['weekday'], row['hour']),
            row['C1'], row['C2']
        )
        for row in vals
    ]
)

def invert_columns(grp):
    return zip(*[g_row[1:] for g_row in grp])

merged = [
    {
        'zone': key[0],
        'weekday': key[1],
        'hour': key[2],
        **dict(
            zip(["C1", "C2"], [sum(col) for col in invert_columns(grp)])
        )
    }
    for key, grp in groupby(ordered, itemgetter(0))
]

pprint(merged)
which yields
[{'C1': 10, 'C2': 27, 'hour': 12, 'weekday': 1, 'zone': 'A'},
 {'C1': 5, 'C2': 27, 'hour': 6, 'weekday': 2, 'zone': 'B'},
 {'C1': 2, 'C2': 13, 'hour': 8, 'weekday': 5, 'zone': 'C'}]

How to sum all the values that belong to the same key?

I'm pulling data from the database, and assuming I have something like this:

Product Name    Quantity
a               3
a               5
b               2
c               7
I want to sum the Quantity based on Product Name, so this is what I want:
product = {'a': 8, 'b': 2, 'c': 7}
Here's what I'm trying to do after fetching the data from the database:
for row in result:
    product[row['product_name']] += row['quantity']
but this will give me 'a' = 5 only, not 8.
Option 1: pandas
This is one way, assuming you begin with a pandas dataframe df. This solution has O(n log n) complexity.
product = df.groupby('Product Name')['Quantity'].sum().to_dict()
# {'a': 8, 'b': 2, 'c': 7}
The idea is you can perform a groupby operation, which produces a series indexed by "Product Name". Then use the to_dict() method to convert to a dictionary.
Option 2: collections.Counter
If you begin with a list or iterator of results, and wish to use a for loop, you can use collections.Counter for O(n) complexity.
from collections import Counter
result = [['a', 3],
          ['a', 5],
          ['b', 2],
          ['c', 7]]

product = Counter()
for row in result:
    product[row[0]] += row[1]

print(product)
# Counter({'a': 8, 'c': 7, 'b': 2})
Option 3: itertools.groupby
You can also use a dictionary comprehension with itertools.groupby. This requires sorting beforehand.
from itertools import groupby
res = {i: sum(list(zip(*j))[1]) for i, j in groupby(sorted(result), key=lambda x: x[0])}
# {'a': 8, 'b': 2, 'c': 7}
If you insist on using loops, you can do this:
# fake data to make the script runnable
result = [
    {'product_name': 'a', 'quantity': 3},
    {'product_name': 'a', 'quantity': 5},
    {'product_name': 'b', 'quantity': 2},
    {'product_name': 'c', 'quantity': 7}
]

# solution with defaultdict and loops
from collections import defaultdict

d = defaultdict(int)
for row in result:
    d[row['product_name']] += row['quantity']
print(dict(d))
The output:
{'a': 8, 'b': 2, 'c': 7}
Since you mention pandas
df.set_index('ProductName').Quantity.sum(level=0).to_dict()
Out[20]: {'a': 8, 'b': 2, 'c': 7}
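Note that Series.sum(level=0) was deprecated and later removed in pandas 2.0; assuming a current pandas version, a hedged equivalent is a groupby on the index level:

# Equivalent under pandas >= 2.0, where sum(level=...) no longer exists
df.set_index('ProductName').Quantity.groupby(level=0).sum().to_dict()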
Use tuples to store the result.
Edit:
It's not clear whether the data mentioned is really a dataframe.
If it is, then li = [tuple(x) for x in df.to_records(index=False)]
li = [('a', 3), ('a', 5), ('b', 2), ('c', 7)]

d = dict()
for key, val in li:
    val_old = 0
    if key in d:
        val_old = d[key]
    d[key] = val + val_old
print(d)
Output
{'a': 8, 'b': 2, 'c': 7}

Convert a Pandas DataFrame to a dictionary

I have a DataFrame with four columns. I want to convert this DataFrame to a Python dictionary, where the elements of the first column are the keys and the elements of the other columns in the same row are the values.
DataFrame:
  ID  A  B  C
0  p  1  3  2
1  q  4  3  2
2  r  4  0  9
Output should be like this:
Dictionary:
{'p': [1,3,2], 'q': [4,3,2], 'r': [4,0,9]}
The to_dict() method sets the column names as dictionary keys so you'll need to reshape your DataFrame slightly. Setting the 'ID' column as the index and then transposing the DataFrame is one way to achieve this.
to_dict() also accepts an 'orient' argument which you'll need in order to output a list of values for each column. Otherwise, a dictionary of the form {index: value} will be returned for each column.
These steps can be done with the following line:
>>> df.set_index('ID').T.to_dict('list')
{'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}
In case a different dictionary format is needed, here are examples of the possible orient arguments. Consider the following simple DataFrame:
>>> df = pd.DataFrame({'a': ['red', 'yellow', 'blue'], 'b': [0.5, 0.25, 0.125]})
>>> df
        a      b
0     red  0.500
1  yellow  0.250
2    blue  0.125
Then the options are as follows.
dict - the default: column names are keys, values are dictionaries of index:data pairs
>>> df.to_dict('dict')
{'a': {0: 'red', 1: 'yellow', 2: 'blue'},
 'b': {0: 0.5, 1: 0.25, 2: 0.125}}
list - keys are column names, values are lists of column data
>>> df.to_dict('list')
{'a': ['red', 'yellow', 'blue'],
 'b': [0.5, 0.25, 0.125]}
series - like 'list', but values are Series
>>> df.to_dict('series')
{'a': 0       red
      1    yellow
      2      blue
      Name: a, dtype: object,
 'b': 0    0.500
      1    0.250
      2    0.125
      Name: b, dtype: float64}
split - splits columns/data/index as keys with values being column names, data values by row and index labels respectively
>>> df.to_dict('split')
{'columns': ['a', 'b'],
 'data': [['red', 0.5], ['yellow', 0.25], ['blue', 0.125]],
 'index': [0, 1, 2]}
records - each row becomes a dictionary where key is column name and value is the data in the cell
>>> df.to_dict('records')
[{'a': 'red', 'b': 0.5},
 {'a': 'yellow', 'b': 0.25},
 {'a': 'blue', 'b': 0.125}]
index - like 'records', but a dictionary of dictionaries with keys as index labels (rather than a list)
>>> df.to_dict('index')
{0: {'a': 'red', 'b': 0.5},
 1: {'a': 'yellow', 'b': 0.25},
 2: {'a': 'blue', 'b': 0.125}}
Should a dictionary like:
{'red': '0.500', 'yellow': '0.250', 'blue': '0.125'}
be required out of a dataframe like:
        a      b
0     red  0.500
1  yellow  0.250
2    blue  0.125
simplest way would be to do:
dict(df.values)
(Note this relies on the dataframe having exactly two columns, so each row becomes one key/value pair.)
working snippet below:
import pandas as pd
df = pd.DataFrame({'a': ['red', 'yellow', 'blue'], 'b': [0.5, 0.25, 0.125]})
dict(df.values)
Follow these steps:
Suppose your dataframe is as follows:
>>> df
   A  B  C ID
0  1  3  2  p
1  4  3  2  q
2  4  0  9  r
1. Use set_index to set ID columns as the dataframe index.
df.set_index("ID", drop=True, inplace=True)
2. Use the orient="index" parameter to have the index as dictionary keys.
dictionary = df.to_dict(orient="index")
The results will be as follows:
>>> dictionary
{'p': {'A': 1, 'B': 3, 'C': 2}, 'q': {'A': 4, 'B': 3, 'C': 2}, 'r': {'A': 4, 'B': 0, 'C': 9}}
3. If you need to have each sample as a list, run the following code, determining the column order first:
column_order = ["A", "B", "C"]  # Determine your preferred order of columns
d = {}  # Initialize the new dictionary as an empty dictionary
for k in dictionary:
    d[k] = [dictionary[k][column_name] for column_name in column_order]
Try using zip:
df = pd.read_csv("file")
d = dict([(i, [a, b, c]) for i, a, b, c in zip(df.ID, df.A, df.B, df.C)])
print(d)
Output:
{'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}
If you don't mind the dictionary values being tuples, you can use itertuples:
>>> {x[0]: x[1:] for x in df.itertuples(index=False)}
{'p': (1, 3, 2), 'q': (4, 3, 2), 'r': (4, 0, 9)}
For my use (node names with xy positions) I found #user4179775's answer the most helpful / intuitive:
import pandas as pd

df = pd.read_csv('glycolysis_nodes_xy.tsv', sep='\t')
df.head()
    nodes    x    y
0  c00033  146  958
1  c00031  601  195
...

xy_dict_list = dict([(i, [a, b]) for i, a, b in zip(df.nodes, df.x, df.y)])
xy_dict_list
{'c00022': [483, 868],
 'c00024': [146, 868],
 ... }

xy_dict_tuples = dict([(i, (a, b)) for i, a, b in zip(df.nodes, df.x, df.y)])
xy_dict_tuples
{'c00022': (483, 868),
 'c00024': (146, 868),
 ... }
Addendum
I later returned to this issue, for other, but related, work. Here is an approach that more closely mirrors the [excellent] accepted answer.
node_df = pd.read_csv('node_prop-glycolysis_tca-from_pg.tsv', sep='\t')
node_df.head()
   node  kegg_id kegg_cid        name  wt  vis
0    22       22   c00022    pyruvate   1    1
1    24       24   c00024  acetyl-CoA   1    1
...
Convert Pandas dataframe to a [list], {dict}, {dict of {dict}}, ...
Per accepted answer:
node_df.set_index('kegg_cid').T.to_dict('list')
{'c00022': [22, 22, 'pyruvate', 1, 1],
 'c00024': [24, 24, 'acetyl-CoA', 1, 1],
 ... }

node_df.set_index('kegg_cid').T.to_dict('dict')
{'c00022': {'kegg_id': 22, 'name': 'pyruvate', 'node': 22, 'vis': 1, 'wt': 1},
 'c00024': {'kegg_id': 24, 'name': 'acetyl-CoA', 'node': 24, 'vis': 1, 'wt': 1},
 ... }
In my case, I wanted to do the same thing but with selected columns from the Pandas dataframe, so I needed to slice the columns. There are two approaches.
Directly:
(see: Convert pandas to dictionary defining the columns used for the key values)
node_df.set_index('kegg_cid')[['name', 'wt', 'vis']].T.to_dict('dict')
{'c00022': {'name': 'pyruvate', 'vis': 1, 'wt': 1},
 'c00024': {'name': 'acetyl-CoA', 'vis': 1, 'wt': 1},
 ... }
"Indirectly:" first, slice the desired columns/data from the Pandas dataframe (again, two approaches),
node_df_sliced = node_df[['kegg_cid', 'name', 'wt', 'vis']]
or
node_df_sliced2 = node_df.loc[:, ['kegg_cid', 'name', 'wt', 'vis']]
that can then be used to create a dictionary of dictionaries
node_df_sliced.set_index('kegg_cid').T.to_dict('dict')
{'c00022': {'name': 'pyruvate', 'vis': 1, 'wt': 1},
 'c00024': {'name': 'acetyl-CoA', 'vis': 1, 'wt': 1},
 ... }
Most of the answers do not deal with the situation where ID can exist multiple times in the dataframe. In case ID can be duplicated in the Dataframe df you want to use a list to store the values (a.k.a a list of lists), grouped by ID:
{k: [g['A'].tolist(), g['B'].tolist(), g['C'].tolist()] for k,g in df.groupby('ID')}
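For instance, a hedged example, assuming a hypothetical frame with a duplicated ID (this frame is my illustration, not from the question):

import pandas as pd

df = pd.DataFrame([['p', 1, 3, 2], ['q', 4, 3, 2], ['p', 4, 0, 9]],
                  columns=['ID', 'A', 'B', 'C'])
out = {k: [g['A'].tolist(), g['B'].tolist(), g['C'].tolist()]
       for k, g in df.groupby('ID')}
# {'p': [[1, 4], [3, 0], [2, 9]], 'q': [[4], [3], [2]]}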
Dictionary comprehension & iterrows() method could also be used to get the desired output.
result = {row.ID: [row.A, row.B, row.C] for (index, row) in df.iterrows()}
df = pd.DataFrame([['p', 1, 3, 2], ['q', 4, 3, 2], ['r', 4, 0, 9]], columns=['ID', 'A', 'B', 'C'])
my_dict = {k: list(v) for k, v in zip(df['ID'], df.drop(columns='ID').values)}
print(my_dict)
with output
{'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}
With this method, the columns of the dataframe will be the keys and the series of the dataframe will be the values.
data_dict = dict()
for col in dataframe.columns:
    data_dict[col] = dataframe[col].values.tolist()
DataFrame.to_dict() converts DataFrame to dictionary.
Example
>>> df = pd.DataFrame(
...     {'col1': [1, 2], 'col2': [0.5, 0.75]}, index=['a', 'b'])
>>> df
   col1  col2
a     1  0.50
b     2  0.75
>>> df.to_dict()
{'col1': {'a': 1, 'b': 2}, 'col2': {'a': 0.5, 'b': 0.75}}
See the to_dict() documentation for details.

Getting keys with max value from dictionary with more than one keys with same value

How do I get the keys with the maximum value when more than one key has the same value?
Example: d = {'a': 1, 'c': 4, 'b': 99, 'e': 4, 'f': 99}; I need to return 'b', 'f'
>>> d = {'a': 1, 'c': 4, 'b': 99, 'e': 4, 'f': 99}
>>> maxval = max(d.values())
>>> [k for k in d if d[k]==maxval]
['b', 'f']
maxValue = max(d.values())
print([key for key in d.keys() if d[key] == maxValue])
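A single pass can also collect the tied keys directly; a minimal sketch (my illustration, not from the answers above):

# Hedged sketch: track the best value and its keys in one pass.
best_keys, best_val = [], None
for k, v in d.items():
    if best_val is None or v > best_val:
        best_keys, best_val = [k], v
    elif v == best_val:
        best_keys.append(k)
# best_keys == ['b', 'f']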
