Python list of dictionaries: Get last item of each day

I have a list of dictionaries, ordered by the key 'date':
from datetime import datetime

d = [{'date': datetime.strptime('2016-01-01 07:00', "%Y-%m-%d %H:%M"), 'val': 1},
     {'date': datetime.strptime('2016-01-01 23:00', "%Y-%m-%d %H:%M"), 'val': 3},
     {'date': datetime.strptime('2016-01-02 07:00', "%Y-%m-%d %H:%M"), 'val': 5},
     {'date': datetime.strptime('2016-01-02 22:13', "%Y-%m-%d %H:%M"), 'val': 7},
     {'date': datetime.strptime('2016-01-02 23:00', "%Y-%m-%d %H:%M"), 'val': 9},
     {'date': datetime.strptime('2016-01-03 00:10', "%Y-%m-%d %H:%M"), 'val': 17},
     {'date': datetime.strptime('2016-01-03 09:12', "%Y-%m-%d %H:%M"), 'val': 25},
     {'date': datetime.strptime('2016-01-03 21:52', "%Y-%m-%d %H:%M"), 'val': 37}]
And I want to get the last (latest) item of each day, so in this case it would be:
{'date': datetime.strptime('2016-01-01 23:00', "%Y-%m-%d %H:%M"), 'val': 3},
{'date': datetime.strptime('2016-01-02 23:00', "%Y-%m-%d %H:%M"), 'val': 9},
{'date': datetime.strptime('2016-01-03 21:52', "%Y-%m-%d %H:%M"), 'val': 37},
I have the following piece of code which does the trick:
previous_item = None
wanted_data = []
for index, entry in enumerate(d):
    if not previous_item:
        previous_item = entry
        continue
    if entry['date'].date() != previous_item['date'].date():
        wanted_data.append(previous_item)
    previous_item = entry
    # Add the last item as well
    if index + 1 == len(d):
        wanted_data.append(entry)
But I believe there are better and faster ways to do it... besides, that's pretty ugly.
Is there a more Pythonic way to achieve this?
Thanks!

Assuming that the data is already sorted by 'date' (it seems to be in your case), you can use itertools.groupby to group by the date(), and then get the last item from each group.
>>> import itertools
>>> d = sorted(d, key=lambda x: x["date"])  # only if not already sorted
>>> groups = itertools.groupby(d, lambda x: x["date"].date())
>>> wanted_data = [list(grp)[-1] for key, grp in groups]
>>> wanted_data
[{'date': datetime.datetime(2016, 1, 1, 23, 0), 'val': 3},
{'date': datetime.datetime(2016, 1, 2, 23, 0), 'val': 9},
{'date': datetime.datetime(2016, 1, 3, 21, 52), 'val': 37}]
Note that this will expand each of the groups into a list. If this is too expensive, because there are very many entries for each date, you could create a function to get the last entry from an iterator, e.g. using reduce (or functools.reduce in Python 3):
>>> import functools
>>> last = lambda it: functools.reduce(lambda x, y: y, it)
>>> groups = itertools.groupby(d, lambda x: x["date"].date())  # recreate; the iterator above is already exhausted
>>> wanted_data = [last(grp) for key, grp in groups]
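As another option, here is a minimal sketch of a dict-based idiom, assuming d is sorted by 'date': key a dict by the calendar date and let later entries overwrite earlier ones, so the values that remain are the last entry of each day. Dicts preserve insertion order in Python 3.7+, so the result stays in date order.
last_per_day = {}
for entry in d:
    # later entries for the same date overwrite earlier ones
    last_per_day[entry['date'].date()] = entry
wanted_data = list(last_per_day.values())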

Related

How to convert key to value in dictionary type?

I have a question about converting keys to values.
First, I have this kind of word count in a DataFrame.
[Example]
dict = {'forest': 10, 'station': 3, 'office': 7, 'park': 2}
I want to get this result.
[Result]
result = {'name': 'forest', 'value': 10,
'name': 'station', 'value': 3,
'name': 'office', 'value': 7,
'name': 'park', 'value': 2}
Please check this issue.
As Rakesh said:
dict cannot have duplicate keys
The closest way to achieve what you want is to build something like this:
my_dict = {'forest': 10, 'station': 3, 'office': 7, 'park': 2}
result = list(map(lambda x: {'name': x[0], 'value': x[1]}, my_dict.items()))
You will get
result = [
    {'name': 'forest', 'value': 10},
    {'name': 'station', 'value': 3},
    {'name': 'office', 'value': 7},
    {'name': 'park', 'value': 2},
]
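A plain list comprehension produces the same list and reads a little more naturally than map/lambda:
result = [{'name': name, 'value': value} for name, value in my_dict.items()]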
As Rakesh said, you can't have duplicate keys in a dictionary.
You can simply try this, using a running counter as the key:
my_dict = {'forest': 10, 'station': 3, 'office': 7, 'park': 2}
result = {}
count = 0
for key in my_dict:
    result[count] = {'name': key, 'value': my_dict[key]}
    count = count + 1
print(result)
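The same loop fits in a single comprehension with enumerate:
result = {i: {'name': key, 'value': value}
          for i, (key, value) in enumerate(my_dict.items())}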

Python - group/merge dictionaries based on key/values identity

I have a list containing many dictionaries with same keys but different values.
What I would like to do is to group/merge dictionaries based on the values of some of the keys.
It's probably faster to show an example rather than trying to explain:
[{'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 3, 'C2': 15},
 {'zone': 'B', 'weekday': 2, 'hour': 6, 'C1': 5, 'C2': 27},
 {'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 7, 'C2': 12},
 {'zone': 'C', 'weekday': 5, 'hour': 8, 'C1': 2, 'C2': 13}]
So, what I want to achieve is merging the first and third dictionary, since they have the same "zone", "hour" and "weekday", summing the values in C1 and C2:
[{'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 10, 'C2': 27},
 {'zone': 'B', 'weekday': 2, 'hour': 6, 'C1': 5, 'C2': 27},
 {'zone': 'C', 'weekday': 5, 'hour': 8, 'C1': 2, 'C2': 13}]
Any help here? :) I've been struggling with this for a couple of days; I've got a bad, unscalable solution, but I'm sure there is something far more Pythonic that I could put in place.
Thanks!
Sort, then group by the relevant keys; note that itertools.groupby only groups consecutive items, which is why the sort with the same key function has to come first. Then iterate over the groups and create new dictionaries with summed values.
import operator
import itertools

keys = operator.itemgetter('zone', 'weekday', 'hour')
c1_c2 = operator.itemgetter('C1', 'C2')

# data is your list of dicts
data.sort(key=keys)
grouped = itertools.groupby(data, keys)

new_data = []
for (zone, weekday, hour), g in grouped:
    c1, c2 = 0, 0
    for d in g:
        c1 += d['C1']
        c2 += d['C2']
    new_data.append({'zone': zone, 'weekday': weekday,
                     'hour': hour, 'C1': c1, 'C2': c2})
That last loop could also be written as:
for (zone, weekday, hour), g in grouped:
    cees = map(c1_c2, g)
    c1, c2 = map(sum, zip(*cees))
    new_data.append({'zone': zone, 'weekday': weekday,
                     'hour': hour, 'C1': c1, 'C2': c2})
By using a defaultdict you can merge them in linear time.
from collections import defaultdict

res = defaultdict(lambda: defaultdict(int))
for d in dictionaries:
    res[(d['zone'], d['weekday'], d['hour'])]['C1'] += d['C1']
    res[(d['zone'], d['weekday'], d['hour'])]['C2'] += d['C2']
The drawback is that you need another pass to have the output as you've defined it.
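That second pass is short, though; a minimal sketch of the reshaping (the same idea appears in the benchmark script further down):
merged = [
    {'zone': zone, 'weekday': weekday, 'hour': hour, **counts}
    for (zone, weekday, hour), counts in res.items()
]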
I've gone ahead and written a slightly longer solution, making use of namedtuples as the keys of the dictionary:
from collections import namedtuple

zones = [{'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 3, 'C2': 15},
         {'zone': 'B', 'weekday': 2, 'hour': 6, 'C1': 5, 'C2': 27},
         {'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 7, 'C2': 12},
         {'zone': 'C', 'weekday': 5, 'hour': 8, 'C1': 2, 'C2': 13}]

ZoneTime = namedtuple("ZoneTime", ["zone", "weekday", "hour"])

results = dict()
for zone in zones:
    zone_time = ZoneTime(zone['zone'], zone['weekday'], zone['hour'])
    if zone_time in results:
        results[zone_time]['C1'] += zone['C1']
        results[zone_time]['C2'] += zone['C2']
    else:
        results[zone_time] = {'C1': zone['C1'], 'C2': zone['C2']}

print(results)
This uses a namedtuple of (zone, weekday, hour) as the key to each dictionary. Then it's fairly trivial to either add to it if it already exists within results, or create a new entry in the dictionary.
You can definitely make this shorter and "smarter", but it may become less readable.
Edit: Run Time Comparison
My original answer (see below) was not a good one, but I think I made a useful contribution by doing a little run-time analysis of the other answers, so I've edited that portion and put it at the top. Here I include the three other solutions, along with the required transformations to produce the desired output. For completeness I also include a version using pandas, which assumes the user is working with a DataFrame (transforming from a list of dicts to a DataFrame and back was not even close to worth it). Comparison times vary a little depending on the random data generated, but these are fairly representative:
>>> run_timer(100)
Times with 100 values
...with defaultdict: 0.1496697600000516
...with namedtuple: 0.14976404899994122
...with groupby: 0.0690777249999428
...with pandas: 3.3165711250001095
>>> run_timer(1000)
Times with 1000 values
...with defaultdict: 1.267153091999944
...with namedtuple: 0.9605341750000207
...with groupby: 0.6634409229998255
...with pandas: 3.5146895360001054
>>> run_timer(10000)
Times with 10000 values
...with defaultdict: 9.194478484000001
...with namedtuple: 9.157486462000179
...with groupby: 5.18553969300001
...with pandas: 4.704001281000046
>>> run_timer(100000)
Times with 100000 values
...with defaultdict: 59.644778522000024
...with namedtuple: 89.26688319799996
...with groupby: 93.3517027989999
...with pandas: 14.495209061999958
Takeaways:
working with pandas data frames pays off big time for large datasets
NOTE: I do not include conversion between list of dicts and data frame, which is definitely significant
otherwise the accepted solution (by wwii) wins for small to medium datasets, but for very large ones it may be the slowest
changing the sizes of the groups (e.g., by decreasing the number of zones) has a huge effect which is not examined here
Here is the script I used to generate the above.
import random
import pandas
from timeit import timeit
from functools import partial
from itertools import groupby
from operator import itemgetter
from collections import namedtuple, defaultdict
def with_pandas(df):
    return df.groupby(['zone', 'weekday', 'hour']).agg(sum).reset_index()

def with_groupby(data):
    keys = itemgetter('zone', 'weekday', 'hour')
    # data is your list of dicts
    data.sort(key=keys)
    grouped = groupby(data, keys)
    new_data = []
    for (zone, weekday, hour), g in grouped:
        c1, c2 = 0, 0
        for d in g:
            c1 += d['C1']
            c2 += d['C2']
        new_data.append({'zone': zone, 'weekday': weekday,
                         'hour': hour, 'C1': c1, 'C2': c2})
    return new_data
def with_namedtuple(zones):
    ZoneTime = namedtuple("ZoneTime", ["zone", "weekday", "hour"])
    results = dict()
    for zone in zones:
        zone_time = ZoneTime(zone['zone'], zone['weekday'], zone['hour'])
        if zone_time in results:
            results[zone_time]['C1'] += zone['C1']
            results[zone_time]['C2'] += zone['C2']
        else:
            results[zone_time] = {'C1': zone['C1'], 'C2': zone['C2']}
    return [
        {
            'zone': key[0],
            'weekday': key[1],
            'hour': key[2],
            **val
        }
        for key, val in results.items()
    ]
def with_defaultdict(dictionaries):
    res = defaultdict(lambda: defaultdict(int))
    for d in dictionaries:
        res[(d['zone'], d['weekday'], d['hour'])]['C1'] += d['C1']
        res[(d['zone'], d['weekday'], d['hour'])]['C2'] += d['C2']
    return [
        {
            'zone': key[0],
            'weekday': key[1],
            'hour': key[2],
            **val
        }
        for key, val in res.items()
    ]
def gen_random_vals(num):
    return [
        {
            'zone': random.choice('ABCDEFGHIJKLMNOPQRSTUVWXYZ'),
            'weekday': random.randint(1, 7),
            'hour': random.randint(0, 23),
            'C1': random.randint(1, 50),
            'C2': random.randint(1, 50),
        }
        for idx in range(num)
    ]
def run_timer(num_vals=1000, timeit_num=1000):
    vals = gen_random_vals(num_vals)
    df = pandas.DataFrame(vals)
    p_fmt = "\t...with %s: %s"
    times = {
        'defaultdict': timeit(stmt=partial(with_defaultdict, vals), number=timeit_num),
        'namedtuple': timeit(stmt=partial(with_namedtuple, vals), number=timeit_num),
        'groupby': timeit(stmt=partial(with_groupby, vals), number=timeit_num),
        'pandas': timeit(stmt=partial(with_pandas, df), number=timeit_num),
    }
    print("Times with %d values" % num_vals)
    for key, val in times.items():
        print(p_fmt % (key, val))
where
with_groupby uses the solution by wwii
with_namedtuple uses the solution by Jose Salvatierra
with_defaultdict uses the solution by abc
with_pandas uses the solution proposed by Alexander Cécile in comments
assumes data is already in a DataFrame and produces a DataFrame as result
Original answer:
Just for fun, here's a completely different approach using groupby. Granted, it's not the prettiest, but it should be fairly quick.
from itertools import groupby
from operator import itemgetter
from pprint import pprint
vals = [
    {'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 3, 'C2': 15},
    {'zone': 'B', 'weekday': 2, 'hour': 6, 'C1': 5, 'C2': 27},
    {'zone': 'A', 'weekday': 1, 'hour': 12, 'C1': 7, 'C2': 12},
    {'zone': 'C', 'weekday': 5, 'hour': 8, 'C1': 2, 'C2': 13}
]

ordered = sorted(
    [
        (
            (row['zone'], row['weekday'], row['hour']),
            row['C1'], row['C2']
        )
        for row in vals
    ]
)

def invert_columns(grp):
    return zip(*[g_row[1:] for g_row in grp])

merged = [
    {
        'zone': key[0],
        'weekday': key[1],
        'hour': key[2],
        **dict(
            zip(["C1", "C2"], [sum(col) for col in invert_columns(grp)])
        )
    }
    for key, grp in groupby(ordered, itemgetter(0))
]
pprint(merged)
which yields
[{'C1': 10, 'C2': 27, 'hour': 12, 'weekday': 1, 'zone': 'A'},
{'C1': 5, 'C2': 27, 'hour': 6, 'weekday': 2, 'zone': 'B'},
{'C1': 2, 'C2': 13, 'hour': 8, 'weekday': 5, 'zone': 'C'}]

Parsing a fixed-width file in Python with Big Decimals

I have to parse the following file in python:
20100322;232400;1.355800;1.355900;1.355800;1.355900;0
20100322;232500;1.355800;1.355900;1.355800;1.355900;0
20100322;232600;1.355800;1.355800;1.355800;1.355800;0
I need to end up with the following variables (the first line is parsed as an example):
year = 2010
month = 03
day = 22
hour = 23
minute = 24
p1 = Decimal('1.355800')
p2 = Decimal('1.355900')
p3 = Decimal('1.355800')
p4 = Decimal('1.355900')
I have tried:
line = '20100322;232400;1.355800;1.355900;1.355800;1.355900;0'
year = line[:4]
month = line[4:6]
day = line[6:8]
hour = line[9:11]
minute = line[11:13]
p1 = Decimal(line[16:24])
p2 = Decimal(line[25:33])
p3 = Decimal(line[34:42])
p4 = Decimal(line[43:51])
print(year)
print(month)
print(day)
print(hour)
print(minute)
print(p1)
print(p2)
print(p3)
print(p4)
Which works fine, but I am wondering if there is an easier way to parse this (maybe using struct) to avoid having to count each position manually.
from decimal import Decimal
from datetime import datetime
line = "20100322;232400;1.355800;1.355900;1.355800;1.355900;0"
tokens = line.split(";")
dt = datetime.strptime(tokens[0] + tokens[1], "%Y%m%d%H%M%S")
decimals = [Decimal(string) for string in tokens[2:6]]
# datetime objects also have some useful attributes: dt.year, dt.month, etc.
print(dt, *decimals, sep="\n")
Output:
2010-03-22 23:24:00
1.355800
1.355900
1.355800
1.355900
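Since the question mentions struct, here is a minimal sketch of that route for this exact 53-character layout; it is only worthwhile when every line really is fixed-width, and the format string here is an assumption read off the sample lines ('x' pad bytes skip the ';' separators):
import struct
from decimal import Decimal

line = "20100322;232400;1.355800;1.355900;1.355800;1.355900;0"
# 4s2s2s = year/month/day, 2s2s2s = hour/minute/second, four 8s = prices, 1s = flag
fmt = "4s2s2sx2s2s2sx8sx8sx8sx8sx1s"
fields = [f.decode() for f in struct.unpack(fmt, line.encode())]
year, month, day, hour, minute, second = fields[:6]
p1, p2, p3, p4 = (Decimal(p) for p in fields[6:10])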
You could use regex:
import re
to_parse = """
20100322;232400;1.355800;1.355900;1.355800;1.355900;0
20100322;232500;1.355800;1.355900;1.355800;1.355900;0
20100322;232600;1.355800;1.355800;1.355800;1.355800;0
"""
stx = re.compile(
    r'(?P<date>(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2}));'
    r'(?P<time>(?P<hour>\d{2})(?P<minute>\d{2})(?P<second>\d{2}));'
    r'(?P<p1>[\.\-\d]*);(?P<p2>[\.\-\d]*);(?P<p3>[\.\-\d]*);(?P<p4>[\.\-\d]*)'
)
f = [
    {k: float(v) if 'p' in k else int(v) for k, v in a.groupdict().items()}
    for a in stx.finditer(to_parse)
]
print(f)
print(f)
Output:
[{'date': 20100322,
'day': 22,
'hour': 23,
'minute': 24,
'month': 3,
'p1': 1.3558,
'p2': 1.3559,
'p3': 1.3558,
'p4': 1.3559,
'second': 0,
'time': 232400,
'year': 2010},
{'date': 20100322,
'day': 22,
'hour': 23,
'minute': 25,
'month': 3,
'p1': 1.3558,
'p2': 1.3559,
'p3': 1.3558,
'p4': 1.3559,
'second': 0,
'time': 232500,
'year': 2010},
{'date': 20100322,
'day': 22,
'hour': 23,
'minute': 26,
'month': 3,
'p1': 1.3558,
'p2': 1.3558,
'p3': 1.3558,
'p4': 1.3558,
'second': 0,
'time': 232600,
'year': 2010}]
Here I stored everything in a list, but you could actually go through the results of finditer line by line if you don't want to keep everything in memory.
You can also replace float and/or int with Decimal if needed.
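For instance, to honour the Decimal requirement from the question, the comprehension becomes:
from decimal import Decimal

f = [
    {k: Decimal(v) if 'p' in k else int(v) for k, v in a.groupdict().items()}
    for a in stx.finditer(to_parse)
]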

Grouping a Python list of dictionaries and aggregating value data

I have an input list:
inlist = [{"id":123,"hour":5,"groups":"1"},{"id":345,"hour":3,"groups":"1;2"},{"id":65,"hour":-2,"groups":"3"}]
I need to group the dictionaries by their 'groups' value. After that I need to add the min and max hour of each group to every dict in the new grouped lists. The output should look like this:
outlist = [(1, [{"id": 123, "hour": 5, "min_group_hour": 3, "max_group_hour": 5},
                {"id": 345, "hour": 3, "min_group_hour": 3, "max_group_hour": 5}]),
           (2, [{"id": 345, "hour": 3, "min_group_hour": 3, "max_group_hour": 3}]),
           (3, [{"id": 65, "hour": -2, "min_group_hour": -2, "max_group_hour": -2}])]
So far I have managed to group the input list:
new_list = []
for domain in inlist:
    for group in domain['groups'].split(';'):
        d = dict()
        d['id'] = domain['id']
        d['group'] = group
        d['hour'] = domain['hour']
        new_list.append(d)

for k, v in itertools.groupby(new_list, key=itemgetter('group')):
    print(int(k), max(list(v), key=itemgetter('hour')))
And output is
('1', [{'group': '1', 'id': 123, 'hour': 5}])
('2', [{'group': '2', 'id': 345, 'hour': 3}])
('3', [{'group': '3', 'id': 65, 'hour': -2}])
I don't know how to aggregate the values by group. And is there a more Pythonic way of grouping dictionaries by a key value that needs to be split?
Start by creating a dict that maps group numbers to dictionaries:
from collections import defaultdict

dicts_by_group = defaultdict(list)
for dic in inlist:
    groups = map(int, dic['groups'].split(';'))
    for group in groups:
        dicts_by_group[group].append(dic)
This gives us a dict that looks like
{1: [{'id': 123, 'hour': 5, 'groups': '1'},
     {'id': 345, 'hour': 3, 'groups': '1;2'}],
 2: [{'id': 345, 'hour': 3, 'groups': '1;2'}],
 3: [{'id': 65, 'hour': -2, 'groups': '3'}]}
Then iterate over the grouped dicts and set the min_group_hour and max_group_hour for each group:
outlist = []
for group in sorted(dicts_by_group.keys()):
    dicts = dicts_by_group[group]
    min_hour = min(dic['hour'] for dic in dicts)
    max_hour = max(dic['hour'] for dic in dicts)
    dicts = [{'id': dic['id'], 'hour': dic['hour'],
              'min_group_hour': min_hour, 'max_group_hour': max_hour}
             for dic in dicts]
    outlist.append((group, dicts))
Result:
[(1, [{'id': 123, 'hour': 5, 'min_group_hour': 3, 'max_group_hour': 5},
      {'id': 345, 'hour': 3, 'min_group_hour': 3, 'max_group_hour': 5}]),
 (2, [{'id': 345, 'hour': 3, 'min_group_hour': 3, 'max_group_hour': 3}]),
 (3, [{'id': 65, 'hour': -2, 'min_group_hour': -2, 'max_group_hour': -2}])]
IIUC: Here is another way to do it in pandas:
import pandas as pd

data = [{"id": 123, "hour": 5, "group": "1"}, {"id": 345, "hour": 3, "group": "1;2"},
        {"id": 65, "hour": -2, "group": "3"}]
df = pd.DataFrame(data)

# Get minimum
dfmi = df.groupby('group').apply(min)
# Rename hour column as min_hour
dfmi.rename(columns={'hour': 'min_hour'}, inplace=True)
# Get maximum
dfmx = df.groupby('group').apply(max)
# Rename hour column as max_hour
dfmx.rename(columns={'hour': 'max_hour'}, inplace=True)

# Merge min df with main df
df = df.merge(dfmi, on='group', how='outer')
# Merge max df with main df
df = df.merge(dfmx, on='group', how='outer')

output = list(df.apply(lambda x: x.to_dict(), axis=1))
# Dictionary of dictionaries
dict_out = df.to_dict(orient='index')

Django query grouping result

OK, so I have the following model:
Transaction
currency_code: CharField
date_time: DateTimeField
price: IntegerField
I'd like to make a query which returns a dictionary of distinct days, with totals for all transactions under each currency_code. So something like:
{
'2012/05/01': {
'USD': 5000,
'EUR': 3500,
}
'2012/05/02': {
'USD' ...
So far I've got this query, but I'm kind of stuck:
Transaction.objects.extra({'date' : 'date(date_time)'}).values('date', 'currency_code').annotate(Sum('price'))
This gets me a result which looks like the following:
[
{'date': datetime.date(2012, 5, 1), 'price__sum': 5000, 'currency_code': 'USD'}
{'date': datetime.date(2012, 5, 1), 'price__sum': 3500, 'currency_code': 'EUR'}
...
]
Any advice on how I can get my query to group by date? Thanks in advance!
Here's a non-ORM approach to get things done:
>>> from collections import defaultdict
>>> d = defaultdict(dict)
>>> l = [{'a': 1, 'price': 50, 'currency': 'USD'}, {'a': 1, 'price': 55, 'currency': 'USD'},
...      {'a': 1, 'price': 0, 'currency': 'USD'}, {'a': 1, 'price': 20, 'currency': 'EUR'}]
>>> for i in l:
...     if i['currency'] not in d[i['a']].keys():
...         d[i['a']][i['currency']] = i['price']
...     else:
...         d[i['a']][i['currency']] += i['price']
...
>>> d
defaultdict(<type 'dict'>, {1: {'USD': 105, 'EUR': 20}})
Here is the groupby version:
>>> from itertools import groupby
>>> l2 = []
>>> for i, g in groupby(l, lambda x: x['currency']):
...     price = 0.0
...     for p in g:
...         price += p['price']
...     l2.append({'a': p['a'], 'total': price, 'currency': i})
...
>>> l2
[{'a': 1, 'currency': 'USD', 'total': 105.0}, {'a': 1, 'currency': 'EUR', 'total': 20.0}]
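Bridging back to the question, the same bucketing applies directly to the annotated queryset; a rough, untested sketch (assuming the query from the question and django.db.models.Sum):
from collections import defaultdict
from django.db.models import Sum

rows = (Transaction.objects
        .extra({'date': 'date(date_time)'})
        .values('date', 'currency_code')
        .annotate(total=Sum('price')))

totals = defaultdict(dict)
for row in rows:
    # one inner dict per distinct day, keyed by currency code
    totals[row['date']][row['currency_code']] = row['total']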
