Sum a Pandas DataFrame column under the ranges of another DataFrame - python

I have two DataFrames, DF1 and DF2, and I want to aggregate the values of columns in DF1 under the date ranges given by a column in DF2. Here is my reproducible example:
DF1 ranges from 6/14/2013 to 7/13/2013 and is sorted descending in time. Its columns to be aggregated are a and b. Notice that there can be multiple records for the same date.
import pandas as pd

list1 = [{'a': 5, 'date': '7/13/2013', 'b': 13},
         {'a': 4, 'date': '7/12/2013', 'b': 14},
         {'a': 7, 'date': '7/12/2013', 'b': 12},
         {'a': 2, 'date': '7/10/2013', 'b': 18},
         {'a': 9, 'date': '7/7/2013', 'b': 17},
         {'a': 6, 'date': '7/5/2013', 'b': 20},
         {'a': 8, 'date': '6/30/2013', 'b': 12},
         {'a': 5, 'date': '6/29/2013', 'b': 13},
         {'a': 3, 'date': '6/25/2013', 'b': 13},
         {'a': 4, 'date': '6/23/2013', 'b': 10},
         {'a': 1, 'date': '6/22/2013', 'b': 16},
         {'a': 6, 'date': '6/20/2013', 'b': 19},
         {'a': 7, 'date': '6/18/2013', 'b': 12},
         {'a': 9, 'date': '6/16/2013', 'b': 15}]
DF1 = pd.DataFrame(list1)
DF2 contains the weekly date separators, for which the DF1 columns a and b should be aggregated.
list2 = [{'datesep': '6/22/2013', 'c': 32},
         {'datesep': '6/29/2013', 'c': 23},
         {'datesep': '7/6/2013', 'c': 44},
         {'datesep': '7/13/2013', 'c': 18},
         {'datesep': '7/20/2013', 'c': 51}]
DF2 = pd.DataFrame(list2)
What I want to do is keep DF2.c as is, and aggregate DF1.a and DF1.b so that the values get summed at the DF2.datesep separator just above their DF1.date. That is, the values of DF1.a and DF1.b from 6/16/2013 to 6/22/2013 (both inclusive) should be aggregated at the closest next date separator, which is the DF2.datesep=6/22/2013 row; those from 7/7/2013 to 7/13/2013 (both inclusive) should be aggregated at the closest next date separator, which is the DF2.datesep=7/13/2013 row, and so on. The result should therefore look like this (column order doesn't matter):
c date a_sum b_sum
0 32 6/22/2013 23 62
1 23 6/29/2013 12 36
2 44 7/6/2013 14 32
3 18 7/13/2013 27 74
4 51 7/20/2013 - -
I did this with a loop on list1 and list2, but is there a Pandas/Numpy solution that utilizes DF1 and DF2? Thank you!

First you need to convert the date strings to actual datetimes. Then you can use a lambda to calculate a_sum and b_sum for each DF2 row from the DF1 rows whose dates fall in that row's range. Finally, concatenate the sums back onto DF2:
DF1.date = pd.to_datetime(DF1.date)
DF2['end'] = pd.to_datetime(DF2.datesep)
DF2['start'] = DF2['end'].shift(1).fillna(pd.to_datetime('1970-01-01'))
sums = DF2.apply(lambda x: DF1.loc[DF1.date.gt(x.start) & DF1.date.le(x.end), ['a', 'b']].sum(), axis=1)
sums.columns = ['a_sum', 'b_sum']
pd.concat([DF2[['c', 'datesep']], sums], axis=1)
c datesep a_sum b_sum
0 32 6/22/2013 23 62
1 23 6/29/2013 12 36
2 44 7/6/2013 14 32
3 18 7/13/2013 27 74
4 51 7/20/2013 0 0
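If you want to avoid the row-wise apply, here is a vectorized sketch using np.searchsorted (assuming DF2.datesep is sorted ascending, as in the example; the a_sum/b_sum names are just the ones the question asks for):
import numpy as np

dates = pd.to_datetime(DF1['date'])
seps = pd.to_datetime(DF2['datesep'])
# side='left' maps each date to the first separator >= date,
# so a date equal to a separator lands on that separator (inclusive)
idx = np.searchsorted(seps.values, dates.values, side='left')
# sum a and b per separator position, then align back onto DF2
sums = DF1[['a', 'b']].groupby(idx).sum()
out = DF2[['c', 'datesep']].copy()
out[['a_sum', 'b_sum']] = sums.reindex(range(len(DF2)), fill_value=0).to_numpy()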

Related

How can I get the sum of Counter objects from a rolling group by in Pandas?

I'm trying to sum Counter objects over a rolling window within a groupby in Pandas, but I'm running into errors.
For each row, I need a new column holding the sum of the Counter objects over the next 6 (or any other number of) days after that row's date, grouped by id.
To clarify "next 6": this does NOT include the current day. So for Jan 1, 2021, the next six days are Jan 2, Jan 3, Jan 4, Jan 5, Jan 6 and Jan 7.
Code to construct minimal example:
import numpy as np
import pandas as pd
from collections import Counter
ids = [111, 111, 111, 111, 111, 111, 222, 222]
cntr = [Counter({'a': 1, 'b': 2}), Counter({'c': 3}), Counter({'a': 1, 'b': 1}),
        Counter({'a': 1, 'b': 2}), Counter({'c': 3}), Counter({'a': 1, 'b': 1}),
        Counter({'d': 2}), Counter({'e': 3})]
dates = pd.date_range(start='1/1/2018', end='1/04/2018').append(pd.date_range(start='1/08/2018', end='1/11/2018'))
df = pd.DataFrame({'id': ids, 'dates': dates, 'state_cntr': cntr})
The sample dataframe looks like:
id dates state_cntr
0 111 2018-01-01 {'a': 1, 'b': 2}
1 111 2018-01-02 {'c': 3}
2 111 2018-01-03 {'a': 1, 'b': 1}
3 111 2018-01-04 {'a': 1, 'b': 2}
4 111 2018-01-08 {'c': 3}
5 111 2018-01-09 {'a': 1, 'b': 1}
6 222 2018-01-10 {'d': 2}
7 222 2018-01-11 {'e': 3}
Output Required
id dates state_cntr output_needed
0 111 2018-01-01 {'a': 1, 'b': 2} {'a': 2, 'b': 3, 'c': 3}
1 111 2018-01-02 {'c': 3} {'a': 2, 'b': 3, 'c': 3}
2 111 2018-01-03 {'a': 1, 'b': 1} {'a': 2, 'b': 3, 'c': 3}
3 111 2018-01-04 {'a': 1, 'b': 2} {'a': 1, 'b': 1, 'c': 3}
4 111 2018-01-08 {'c': 3} {'a': 1, 'b': 1}
5 111 2018-01-09 {'a': 1, 'b': 1} {}
6 222 2018-01-10 {'d': 2} {'e': 3}
7 222 2018-01-11 {'e': 3} {}
For me, df.groupby(['id', 'dates'])['state_cntr'].sum() at least sums the Counter objects, although that isn't exactly what I want. But the following gives me an error: rolling() seems to require float values, and since what I have is a Counter it throws a TypeError:
grouped_df = df.groupby(['id', 'dates'])
grouped_df['state_cntr'].sum().reset_index(level='id').groupby(['id'])['state_cntr'].rolling('6D').sum()
Errors: TypeError: float() argument must be a string or a number, not 'Counter', followed by DataError: No numeric types to aggregate (and some other cascading ones).
I would appreciate any help I can get. Thanks.
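Since rolling() refuses to aggregate object dtypes, one workable sketch (assuming the df above) is to do the forward-looking windowing manually inside each group, filtering by date arithmetic; sum() with a Counter() start value adds the Counters together:
def next_days_sum(group, days=6):
    # for each row, sum the Counters dated strictly after the row's date
    # and at most `days` days later
    out = []
    for d in group['dates']:
        mask = (group['dates'] > d) & (group['dates'] <= d + pd.Timedelta(days=days))
        out.append(sum(group.loc[mask, 'state_cntr'], Counter()))
    return pd.Series(out, index=group.index)

df['output_needed'] = df.groupby('id', group_keys=False).apply(next_days_sum)
This is quadratic per group, but for Counter values there is no built-in rolling aggregation to lean on.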

Pandas: How to group by column values when column values are dicts?

I am doing an exercise in which the current requirement is to "Find the top 10 major project themes (using column 'mjtheme_namecode')".
My first thought was to do groupby, then count and sort the groups.
However, the values in this column are lists of dicts, e.g.
[{'code': '1', 'name': 'Economic management'},
{'code': '6', 'name': 'Social protection and risk management'}]
and I can't (apparently) group these, at least not with groupby; I get an error:
TypeError: unhashable type: 'list'
Is there a trick? I'm guessing something along the lines of this question.
(I can group by another column that has string values and matches 1:1 with this column, but the exercise is specific.)
There are two steps to solve your problem (using pandas >= 0.25):
Flatten the list of dicts
Turn the dicts into columns
Step 1
df = df.explode('mjtheme_namecode').reset_index(drop=True)
Step 2
df = df.join(pd.DataFrame(df['mjtheme_namecode'].tolist()))
Added: if the dicts are nested several levels deep, you can try json_normalize (pd.json_normalize in pandas >= 1.0):
from pandas.io.json import json_normalize
df = df.join(json_normalize(df['mjtheme_namecode'].tolist()))
The only caveat is that explode duplicates all the other columns, in case that is an issue.
Using sample data:
x = [
    [1, 2, [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}]],
    [1, 3, [{'a': 5, 'b': 6}, {'a': 7, 'b': 8}]]
]
df = pd.DataFrame(x, columns=['col1', 'col2', 'col3'])
Out[1]:
col1 col2 col3
0 1 2 [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}]
1 1 3 [{'a': 5, 'b': 6}, {'a': 7, 'b': 8}]
## Step 1
df = df.explode('col3').reset_index(drop=True)
(the reset_index is needed because explode duplicates index labels, and the join in step 2 aligns on the index)
Out[2]:
col1 col2 col3
0 1 2 {'a': 1, 'b': 3}
1 1 2 {'a': 2, 'b': 4}
2 1 3 {'a': 5, 'b': 6}
3 1 3 {'a': 7, 'b': 8}
## Step 2
df = df.join(pd.DataFrame(df['col3'].tolist()))
Out[3]:
col1 col2 col3 a b
0 1 2 {'a': 1, 'b': 3} 1 3
1 1 2 {'a': 2, 'b': 4} 2 4
2 1 3 {'a': 5, 'b': 6} 5 6
3 1 3 {'a': 7, 'b': 8} 7 8
## Now you can group by the new columns
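Back to the original exercise, a sketch of the final count (assuming df is the question's frame and the flattened column holds {'code': ..., 'name': ...} dicts, as the question shows):
exploded = df.explode('mjtheme_namecode').reset_index(drop=True)
themes = pd.DataFrame(exploded['mjtheme_namecode'].tolist())
top10 = themes['name'].value_counts().head(10)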

Keeping additional column when normalizing list of dicts

I have a dataframe containing id and list of dicts:
df = pd.DataFrame({
    'list_of_dicts': [[{'a': 1, 'b': 2}, {'a': 11, 'b': 22}],
                      [{'a': 3, 'b': 4}, {'a': 33, 'b': 44}]],
    'id': [100, 200]
})
and I want to normalize it like this:
id a b
0 100 1 2
0 100 11 22
1 200 3 4
1 200 33 44
This gets most of the way:
pd.concat([
    pd.DataFrame.from_dict(item)
    for item in df.list_of_dicts
])
but is missing the id column.
I'm most interested in readability.
How about something like this:
d = {
    'list_of_dicts': [[{'a': 1, 'b': 2}, {'a': 11, 'b': 22}],
                      [{'a': 3, 'b': 4}, {'a': 33, 'b': 44}]],
    'id': [100, 200]
}
df = pd.DataFrame([pd.Series(x) for ld in d['list_of_dicts'] for x in ld])
ids = [[x] * len(l) for l, x in zip(d['list_of_dicts'], d['id'])]
df['id'] = pd.Series([x for l in ids for x in l])
EDIT - Here's a simpler version
t = [[('id', i)] + list(l.items()) for i, ll in zip(d['id'], d['list_of_dicts']) for l in ll]
df = pd.DataFrame([dict(x) for x in t])
And, if you really want the id column first, you can change dict to OrderedDict from the collections module (on Python 3.7+ a plain dict already preserves insertion order).
This is what I call an incomprehension
pd.DataFrame(
    *list(map(list, zip(
        *[(d, i) for i, l in zip(df.id, df.list_of_dicts) for d in l]
    )))
).rename_axis('id').reset_index()
id a b
0 100 1 2
1 100 11 22
2 200 3 4
3 200 33 44
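For what it's worth, a sketch of a newer spelling (assuming pandas >= 0.25 for explode) that reads fairly directly:
exploded = df.explode('list_of_dicts').reset_index(drop=True)
result = pd.DataFrame(exploded['list_of_dicts'].tolist()).assign(id=exploded['id'])
result = result[['id', 'a', 'b']]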

Reorganizing the data in a dataframe

I have data in the following format:
data = [
    {'data1': [{'sub_data1': 0}, {'sub_data2': 4}, {'sub_data3': 1}, {'sub_data4': -5}]},
    {'data2': [{'sub_data1': 1}, {'sub_data2': 1}, {'sub_data3': 1}, {'sub_data4': 12}]},
    {'data3': [{'sub_data1': 3}, {'sub_data2': 0}, {'sub_data3': 1}, {'sub_data4': 7}]},
]
How should I reorganize it so that when I save it to HDF with
a = pd.DataFrame(data, columns=['data1', 'data2', 'data3'])
a.to_hdf('my_data.hdf', key='df')
I get a dataframe in the following format:
data1 data2 data3
_________________________________________
sub_data1 0 1 3
sub_data2 4 1 0
sub_data3 1 1 1
sub_data4 -5 12 7
update1: after following the advice given below and saving to an HDF file and reading it back, I got this, which is not what I want:
data1 data2 data3
0 {u'sub_data1': 22} {u'sub_data1': 33} {u'sub_data1': 44}
1 {u'sub_data2': 0} {u'sub_data2': 11} {u'sub_data2': 44}
2 {u'sub_data3': 12} {u'sub_data3': 16} {u'sub_data3': 19}
3 {u'sub_data4': 0} {u'sub_data4': 0} {u'sub_data4': 0}
Well, if you convert your data into a dictionary of dictionaries, you can then just create the DataFrame very easily:
In [25]: data2 = {k: {m: n for i in v for m, n in i.items()} for x in data for k, v in x.items()}
In [26]: data2
Out[26]:
{'data1': {'sub_data1': 0, 'sub_data2': 4, 'sub_data3': 1, 'sub_data4': -5},
'data2': {'sub_data1': 1, 'sub_data2': 1, 'sub_data3': 1, 'sub_data4': 12},
'data3': {'sub_data1': 3, 'sub_data2': 0, 'sub_data3': 1, 'sub_data4': 7}}
In [27]: pd.DataFrame(data2)
Out[27]:
data1 data2 data3
sub_data1 0 1 3
sub_data2 4 1 0
sub_data3 1 1 1
sub_data4 -5 12 7
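And since update1 was about the HDF round-trip: once the cells are plain scalars (as in Out[27]) rather than dicts, the frame survives saving and loading. A sketch (the filename and key are arbitrary, and to_hdf needs the tables package installed):
a = pd.DataFrame(data2)
a.to_hdf('my_data.hdf', key='df', mode='w')
restored = pd.read_hdf('my_data.hdf', key='df')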

Convert a Pandas DataFrame to a dictionary

I have a DataFrame with four columns. I want to convert this DataFrame to a Python dictionary, with the elements of the first column as keys and the elements of the other columns in the same row as values.
DataFrame:
ID A B C
0 p 1 3 2
1 q 4 3 2
2 r 4 0 9
Output should be like this:
Dictionary:
{'p': [1,3,2], 'q': [4,3,2], 'r': [4,0,9]}
The to_dict() method sets the column names as dictionary keys so you'll need to reshape your DataFrame slightly. Setting the 'ID' column as the index and then transposing the DataFrame is one way to achieve this.
to_dict() also accepts an 'orient' argument which you'll need in order to output a list of values for each column. Otherwise, a dictionary of the form {index: value} will be returned for each column.
These steps can be done with the following line:
>>> df.set_index('ID').T.to_dict('list')
{'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}
In case a different dictionary format is needed, here are examples of the possible orient arguments. Consider the following simple DataFrame:
>>> df = pd.DataFrame({'a': ['red', 'yellow', 'blue'], 'b': [0.5, 0.25, 0.125]})
>>> df
a b
0 red 0.500
1 yellow 0.250
2 blue 0.125
Then the options are as follows.
dict - the default: column names are keys, values are dictionaries of index:data pairs
>>> df.to_dict('dict')
{'a': {0: 'red', 1: 'yellow', 2: 'blue'},
'b': {0: 0.5, 1: 0.25, 2: 0.125}}
list - keys are column names, values are lists of column data
>>> df.to_dict('list')
{'a': ['red', 'yellow', 'blue'],
'b': [0.5, 0.25, 0.125]}
series - like 'list', but values are Series
>>> df.to_dict('series')
{'a': 0 red
1 yellow
2 blue
Name: a, dtype: object,
'b': 0 0.500
1 0.250
2 0.125
Name: b, dtype: float64}
split - keys are 'columns', 'data' and 'index'; values are the column names, the data values row by row, and the index labels respectively
>>> df.to_dict('split')
{'columns': ['a', 'b'],
'data': [['red', 0.5], ['yellow', 0.25], ['blue', 0.125]],
'index': [0, 1, 2]}
records - each row becomes a dictionary where key is column name and value is the data in the cell
>>> df.to_dict('records')
[{'a': 'red', 'b': 0.5},
{'a': 'yellow', 'b': 0.25},
{'a': 'blue', 'b': 0.125}]
index - like 'records', but a dictionary of dictionaries with keys as index labels (rather than a list)
>>> df.to_dict('index')
{0: {'a': 'red', 'b': 0.5},
1: {'a': 'yellow', 'b': 0.25},
2: {'a': 'blue', 'b': 0.125}}
Should a dictionary like:
{'red': 0.5, 'yellow': 0.25, 'blue': 0.125}
be required out of a dataframe like:
a b
0 red 0.500
1 yellow 0.250
2 blue 0.125
the simplest way is:
dict(df.values)
working snippet below:
import pandas as pd
df = pd.DataFrame({'a': ['red', 'yellow', 'blue'], 'b': [0.5, 0.25, 0.125]})
dict(df.values)
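Note that dict(df.values) only makes sense when the frame has exactly two columns (keys, then values); with a wider frame you would slice the two columns first. A sketch with a hypothetical extra column:
wide = pd.DataFrame({'a': ['red', 'yellow'], 'b': [0.5, 0.25], 'c': [1, 2]})
dict(wide[['a', 'b']].values)  # {'red': 0.5, 'yellow': 0.25}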
Follow these steps:
Suppose your dataframe is as follows:
>>> df
A B C ID
0 1 3 2 p
1 4 3 2 q
2 4 0 9 r
1. Use set_index to set the ID column as the dataframe index.
df.set_index("ID", drop=True, inplace=True)
2. Use the orient="index" parameter to have the index as dictionary keys.
dictionary = df.to_dict(orient="index")
The results will be as follows:
>>> dictionary
{'q': {'A': 4, 'B': 3, 'C': 2}, 'p': {'A': 1, 'B': 3, 'C': 2}, 'r': {'A': 4, 'B': 0, 'C': 9}}
3. If you need to have each sample as a list, run the following code, determining the column order first:
column_order = ["A", "B", "C"]  # determine your preferred order of columns
d = {}  # initialize the new dictionary as an empty dictionary
for k in dictionary:
    d[k] = [dictionary[k][column_name] for column_name in column_order]
Try using zip:
df = pd.read_csv("file")
d = dict([(i, [a, b, c]) for i, a, b, c in zip(df.ID, df.A, df.B, df.C)])
print(d)
Output:
{'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}
If you don't mind the dictionary values being tuples, you can use itertuples:
>>> {x[0]: x[1:] for x in df.itertuples(index=False)}
{'p': (1, 3, 2), 'q': (4, 3, 2), 'r': (4, 0, 9)}
For my use (node names with xy positions) I found @user4179775's answer to be the most helpful / intuitive:
import pandas as pd
df = pd.read_csv('glycolysis_nodes_xy.tsv', sep='\t')
df.head()
nodes x y
0 c00033 146 958
1 c00031 601 195
...
xy_dict_list = dict([(i, [a, b]) for i, a, b in zip(df.nodes, df.x, df.y)])
xy_dict_list
{'c00022': [483, 868],
'c00024': [146, 868],
... }
xy_dict_tuples = dict([(i, (a, b)) for i, a, b in zip(df.nodes, df.x, df.y)])
xy_dict_tuples
{'c00022': (483, 868),
'c00024': (146, 868),
... }
Addendum
I later returned to this issue for other, related work. Here is an approach that more closely mirrors the [excellent] accepted answer.
node_df = pd.read_csv('node_prop-glycolysis_tca-from_pg.tsv', sep='\t')
node_df.head()
node kegg_id kegg_cid name wt vis
0 22 22 c00022 pyruvate 1 1
1 24 24 c00024 acetyl-CoA 1 1
...
Convert Pandas dataframe to a [list], {dict}, {dict of {dict}}, ...
Per accepted answer:
node_df.set_index('kegg_cid').T.to_dict('list')
{'c00022': [22, 22, 'pyruvate', 1, 1],
'c00024': [24, 24, 'acetyl-CoA', 1, 1],
... }
node_df.set_index('kegg_cid').T.to_dict('dict')
{'c00022': {'kegg_id': 22, 'name': 'pyruvate', 'node': 22, 'vis': 1, 'wt': 1},
'c00024': {'kegg_id': 24, 'name': 'acetyl-CoA', 'node': 24, 'vis': 1, 'wt': 1},
... }
In my case, I wanted to do the same thing but with selected columns from the Pandas dataframe, so I needed to slice the columns. There are two approaches.
Directly:
(see: Convert pandas to dictionary defining the columns used for the key values)
node_df.set_index('kegg_cid')[['name', 'wt', 'vis']].T.to_dict('dict')
{'c00022': {'name': 'pyruvate', 'vis': 1, 'wt': 1},
'c00024': {'name': 'acetyl-CoA', 'vis': 1, 'wt': 1},
... }
"Indirectly:" first, slice the desired columns/data from the Pandas dataframe (again, two approaches),
node_df_sliced = node_df[['kegg_cid', 'name', 'wt', 'vis']]
or
node_df_sliced2 = node_df.loc[:, ['kegg_cid', 'name', 'wt', 'vis']]
which can then be used to create a dictionary of dictionaries
node_df_sliced.set_index('kegg_cid').T.to_dict('dict')
{'c00022': {'name': 'pyruvate', 'vis': 1, 'wt': 1},
'c00024': {'name': 'acetyl-CoA', 'vis': 1, 'wt': 1},
... }
Most of the answers do not deal with the situation where ID can exist multiple times in the dataframe. If ID can be duplicated in the dataframe df, you want to use a list to store the values (i.e. a list of lists), grouped by ID:
{k: [g['A'].tolist(), g['B'].tolist(), g['C'].tolist()] for k,g in df.groupby('ID')}
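For example, with a hypothetical frame where 'p' appears twice:
df = pd.DataFrame([['p', 1, 3, 2], ['p', 4, 3, 2], ['q', 4, 0, 9]], columns=['ID', 'A', 'B', 'C'])
{k: [g['A'].tolist(), g['B'].tolist(), g['C'].tolist()] for k, g in df.groupby('ID')}
# {'p': [[1, 4], [3, 3], [2, 2]], 'q': [[4], [0], [9]]}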
A dictionary comprehension with the iterrows() method can also be used to get the desired output:
result = {row.ID: [row.A, row.B, row.C] for (index, row) in df.iterrows()}
df = pd.DataFrame([['p',1,3,2], ['q',4,3,2], ['r',4,0,9]], columns=['ID','A','B','C'])
my_dict = {k:list(v) for k,v in zip(df['ID'], df.drop(columns='ID').values)}
print(my_dict)
with output
{'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}
With this method, the columns of the dataframe will be the keys and the columns' values, as lists, will be the dictionary values.
data_dict = dict()
for col in dataframe.columns:
    data_dict[col] = dataframe[col].values.tolist()
DataFrame.to_dict() converts DataFrame to dictionary.
Example
>>> df = pd.DataFrame(
...     {'col1': [1, 2], 'col2': [0.5, 0.75]}, index=['a', 'b'])
>>> df
col1 col2
a 1 0.50
b 2 0.75
>>> df.to_dict()
{'col1': {'a': 1, 'b': 2}, 'col2': {'a': 0.5, 'b': 0.75}}
See the DataFrame.to_dict documentation for details.
