Python pandas groupby sorting and concatenating

I have a pandas DataFrame:
df = pd.DataFrame({'a': [1,1,1,1,2,2,2], 'b': ['a','a','a','a','b','b','b'], 'c': ['o','o','o','o','p','p','p'], 'd': [ [2,3,4], [1,3,3,4], [3,3,1,2], [4,1,2], [8,2,1], [0,9,1,2,3], [4,3,1] ], 'e': [13,12,5,10,3,2,5] })
What I want is:
First group by columns a, b, c --- there are two groups
Then sort within each group according to column e in an ascending order
Lastly concatenate within each group column d
So the result I want is:
result = pd.DataFrame({'a':[1,2], 'b':['a','b'], 'c':['o','p'], 'd':[[3,3,1,2,4,1,2,1,3,3,4,2,3,4],[0,9,1,2,3,8,2,1,4,3,1]]})
Could anyone share some quick/elegant ways to get around this? Thanks very much.

You can sort by column e, then group by a, b and c, and use a list comprehension to concatenate (flatten) the d column. Note that sorting before grouping is safe: according to the documentation, groupby preserves the order in which observations appear within each group:
(df.sort_values('e').groupby(['a', 'b', 'c'])['d']
.apply(lambda g: [j for i in g for j in i]).reset_index())
An alternative to the list comprehension is chain.from_iterable from itertools:
from itertools import chain
(df.sort_values('e').groupby(['a', 'b', 'c'])['d']
.apply(lambda g: list(chain.from_iterable(g))).reset_index())
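For reference, running either variant end-to-end on the sample frame reproduces the expected result:

```python
from itertools import chain

import pandas as pd

df = pd.DataFrame({'a': [1,1,1,1,2,2,2],
                   'b': ['a','a','a','a','b','b','b'],
                   'c': ['o','o','o','o','p','p','p'],
                   'd': [[2,3,4], [1,3,3,4], [3,3,1,2], [4,1,2],
                         [8,2,1], [0,9,1,2,3], [4,3,1]],
                   'e': [13,12,5,10,3,2,5]})

# sort by e first; groupby keeps that sorted order inside each group
result = (df.sort_values('e').groupby(['a', 'b', 'c'])['d']
          .apply(lambda g: list(chain.from_iterable(g))).reset_index())
print(result['d'].tolist())
# [[3, 3, 1, 2, 4, 1, 2, 1, 3, 3, 4, 2, 3, 4], [0, 9, 1, 2, 3, 8, 2, 1, 4, 3, 1]]
```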

Related

How to create keys and lists of elements from a column as values from a dataframe

How to create python dictionary using the data below
Df1:

Id    mail-id
1     xyz#gm
1     ygzbb
2     Ghh
2     Hjkk
I want it as
{1:[xyz#gm,ygzbb], 2:[Ghh,Hjkk]}
Something like this?
data = [
    [1, "xyz#gm"],
    [1, "ygzbb"],
    [2, "Ghh"],
    [2, "Hjkk"],
]
dataDict = {}
for k, v in data:
    if k not in dataDict:
        dataDict[k] = []
    dataDict[k].append(v)
print(dataDict)
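The same grouping can be written a little more compactly with collections.defaultdict, which creates the empty list on first access:

```python
from collections import defaultdict

data = [
    [1, "xyz#gm"],
    [1, "ygzbb"],
    [2, "Ghh"],
    [2, "Hjkk"],
]

dataDict = defaultdict(list)
for k, v in data:
    dataDict[k].append(v)  # a missing key starts out as []

print(dict(dataDict))
# {1: ['xyz#gm', 'ygzbb'], 2: ['Ghh', 'Hjkk']}
```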
One option is to groupby the Id column and turn the mail-id into a list in a dictionary comprehension:
{k:v["mail-id"].values.tolist() for k,v in df.groupby("Id")}
One option is to iterate over the set of ids and filter one by one (note the exact column names "Id" and "mail-id"):
>>> df = pd.DataFrame({"Id":[1,1,2,2],"mail-id":["xyz#gm","ygzbb","Ghh","Hjkk"]})
>>> _d = {}
>>> for x in set(df["Id"]):
...     _d.update({x: df[df["Id"] == x]["mail-id"].tolist()})
But it's much faster to use a dictionary comprehension with the built-in pandas DataFrame.groupby; a quick look at the official documentation:
A groupby operation involves some combination of splitting the object, applying a function, and combining the results. This can be used to group large amounts of data and compute operations on these groups.
DataFrame.groupby(by=None, axis=0, level=None, as_index=True, sort=True, group_keys=True, squeeze=NoDefault.no_default, observed=False, dropna=True)
As #fsimonjetz pointed out, this code will be sufficient:
>>> df = pd.DataFrame({"Id":[1,1,2,2],"mail-id":["xyz#gm","ygzbb","Ghh","Hjkk"]})
>>> {k:v["mail-id"].values.tolist() for k,v in df.groupby("Id")}
You can do:
df.groupby('Id').agg(list).to_dict()['mail-id']
Output:
{1: ['xyz#gm', 'ygzbb'], 2: ['Ghh.', 'Hjkk.']}

Sum multiple row combinations in pandas OR numpy: speeding up a slow nested loop

I have a function that takes a pandas dataframe with index labels in the form of <int>_<int> (which basically denotes some size ranges in µm), and columns that hold values for separate samples in those size ranges.
The size ranges are consecutive as in the following example:
df = pd.DataFrame({'A': ['a', 'd', 'g', 'j'], 'B': ['b', 'e', 'h', 'k'], 'C': ['c', 'f', 'i', 'l']}, index = ['0_10', '10_20', '20_30', '30_40'])
A B C
0_10 a b c
10_20 d e f
20_30 g h i
30_40 j k l
Note: for demonstration purposes the values are letters here; the real values are float64 numbers.
Here is the code that I am using so far. The docstring shows what it does. It works, but the nested loop and the iterative creation of new rows make it very slow: for a dataframe with 200 rows and 21 columns it runs for about 2 min.
def combination_sums(df):  # TODO: speed up
    """
    Append new rows to a DF, where each new row is a column-wise sum of an
    original row and any possible combination of consecutively following rows.
    The input DF must have an index according to the scheme below.

    Example:
           INPUT DF                 OUTPUT DF

           A  B  C                  A        B        C
    0_10   a  b  c          0_10    a        b        c
    10_20  d  e  f    -->   10_20   d        e        f
    20_30  g  h  i          20_30   g        h        i
    30_40  j  k  l          30_40   j        k        l
                            0_20    a+d      b+e      c+f
                            0_30    a+d+g    b+e+h    c+f+i
                            0_40    a+d+g+j  b+e+h+k  c+f+i+l
                            10_30   d+g      e+h      f+i
                            10_40   d+g+j    e+h+k    f+i+l
                            20_40   g+j      h+k      i+l
    """
    ol = len(df)  # original length
    for i in range(ol):
        for j in range(i + 1, ol):
            # build the new index label from the lower bound of row i and the upper bound of row j
            new_row_name = df.index[i].split('_')[0] + '_' + df.index[j].split('_')[1]
            df.loc[new_row_name] = df.iloc[i:j + 1].sum()  # sum rows i..j inclusive
    return df
I am wondering what a better, more efficient way could be, e.g. an intermediate conversion to a numpy array and a vectorised operation. From somewhat similar posts (e.g. here), I thought there could be a way with numpy mgrid or ogrid, but those posts were not similar enough for me to adapt them to what I want to achieve.
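One possible vectorised sketch (my own suggestion, not code from the question): build a cumulative-sum array once, so that every consecutive-range sum becomes a difference of two prefix rows, and take all index pairs at once with numpy.triu_indices. The function name combination_sums_fast is made up for illustration.

```python
import numpy as np
import pandas as pd

def combination_sums_fast(df):
    n = len(df)
    vals = df.to_numpy(dtype=float)
    # prefix sums with a leading zero row: csum[k] holds the sum of rows 0..k-1
    csum = np.vstack([np.zeros((1, vals.shape[1])), np.cumsum(vals, axis=0)])
    ii, jj = np.triu_indices(n, k=1)   # all index pairs with i < j
    sums = csum[jj + 1] - csum[ii]     # sum of rows i..j inclusive, for every pair
    lo = [ix.split('_')[0] for ix in df.index]
    hi = [ix.split('_')[1] for ix in df.index]
    names = [f"{lo[i]}_{hi[j]}" for i, j in zip(ii, jj)]
    extra = pd.DataFrame(sums, index=names, columns=df.columns)
    return pd.concat([df, extra])
```

This replaces the row-by-row df.loc writes, which grow the frame on every iteration, with a single concat at the end.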

Pandas use cell value as dict key to return dict value

My question relates to using the values in a dataframe column as keys in order to return their respective values and run a conditional.
I have a dataframe, df, containing a column "count" that has integers from 1 to 8 and a column "category" that has values either "A", "B", or "C"
I have a dictionary, dct, containing pairs A:2, B:4, C:6
This is my (incorrect) code:
result = df[df["count"] >= dct.get(df["category"])]
So I want to return a dataframe where the "count" value for a given row is greater than or equal to the value retrieved from the dictionary using the "category" letter in the same row.
So if the count values were (1, 2, 6, 6) and the category values were (A, B, C, A), the third and fourth rows would be returned in the resulting dataframe.
How do I modify the above code to achieve this?
A good way to go is to map your dictionary into the existing dataframe and then run a query on it:
import pandas as pd
df = pd.DataFrame(data={'count': [4, 5, 6], 'category': ['A', 'B', 'C']})
dct = {'A':5, 'B':4, 'C':-1}
df['min_count'] = df['category'].map(dct)
df = df.query('count >= min_count')
following your logic:
import pandas as pd
dct = {'A':2, 'B':4, 'C':6}
df = pd.DataFrame({'count':[1,2,5,6],
'category':['A','B','C','A']})
print('original dataframe')
print(df)
def process_row(x):
    return x['count'] >= dct[x['category']]

f = df.apply(process_row, axis=1)
df = df[f]
print('final output')
print(df)
output:
original dataframe
   count category
0      1        A
1      2        B
2      5        C
3      6        A
final output
   count category
3      6        A
A small modification to your code:
result = df[df['count'] >= df['category'].apply(lambda x: dct[x])]
You cannot directly use dct.get(df['category']) because df['category'] returns a Series, which is mutable and unhashable, so it cannot be used as a dictionary key (dictionary keys need to be hashable).
So, apply and lambda to the rescue! :)
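For completeness, the per-row lookup can also be vectorised with Series.map, which avoids apply entirely (a sketch using the data from the question):

```python
import pandas as pd

df = pd.DataFrame({'count': [1, 2, 6, 6], 'category': ['A', 'B', 'C', 'A']})
dct = {'A': 2, 'B': 4, 'C': 6}

# map each category to its threshold, then compare element-wise
result = df[df['count'] >= df['category'].map(dct)]
print(result.index.tolist())
# [2, 3]
```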

Count occurrences of strings in a list-valued column of a data frame in Python

I have a column in data frame which looks like below
How do I calculate the frequency of each word? For example, the word 'doorman' appears in 4 rows, so I need the word along with its frequency, i.e. doorman = 4.
This needs to be done for every word.
Please advise
I think you can first flatten the lists in the column and then use Counter:
df = pd.DataFrame({'features':[['a','b','b'],['c'],['a','a']]})
print (df)
features
0 [a, b, b]
1 [c]
2 [a, a]
from itertools import chain
from collections import Counter
print (Counter(list(chain.from_iterable(df.features))))
Counter({'a': 3, 'b': 2, 'c': 1})
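In newer pandas (0.25+), Series.explode gives the same counts without itertools:

```python
import pandas as pd

df = pd.DataFrame({'features': [['a', 'b', 'b'], ['c'], ['a', 'a']]})

# explode turns each list element into its own row, then value_counts tallies them
counts = df['features'].explode().value_counts()
print(counts.to_dict())
# {'a': 3, 'b': 2, 'c': 1}
```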

How to combine two rows in Python list

Suppose I have a 2D list,
a = [['a','b','c',1],
     ['a','b','d',2],
     ['a','e','d',3],
     ['a','e','c',4]]
I want to obtain a list such that if the first two elements in rows are identical, sum the fourth element, drop the third element and combine these rows together, like the following,
b = [['a','b',3],
['a','e',7]]
What is the most efficient way to do this?
If your list is already sorted, then you can use itertools.groupby. Once you group by the first two elements, you can use a generator expression to sum the 4th element and create your new lists.
>>> from itertools import groupby
>>> a = [['a','b','c',1],
...      ['a','b','d',2],
...      ['a','e','d',3],
...      ['a','e','c',4]]
>>> [g[0] + [sum(i[3] for i in g[1])] for g in groupby(a, key=lambda i: i[:2])]
[['a', 'b', 3],
['a', 'e', 7]]
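One caveat with this approach: itertools.groupby only merges adjacent items, so if the input is not already ordered by the first two elements, sort it with the same key first. A minimal sketch with shuffled input:

```python
from itertools import groupby

a = [['a', 'e', 'd', 3], ['a', 'b', 'c', 1], ['a', 'e', 'c', 4], ['a', 'b', 'd', 2]]

a.sort(key=lambda i: i[:2])  # bring rows with equal (first, second) elements together
b = [g[0] + [sum(i[3] for i in g[1])] for g in groupby(a, key=lambda i: i[:2])]
print(b)
# [['a', 'b', 3], ['a', 'e', 7]]
```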
Using pandas's groupby:
import pandas as pd
df = pd.DataFrame(a)
df.groupby([0, 1]).sum().reset_index().values.tolist()
Output:
[['a', 'b', 3L], ['a', 'e', 7L]]
You can use pandas groupby methods to achieve that goal.
import pandas as pd
a = [['a','b','c',1],
     ['a','b','d',2],
     ['a','e','d',3],
     ['a','e','c',4]]
df = pd.DataFrame(a)
df_sum = df.groupby([0,1])[3].sum().reset_index()
array_return = df_sum.values
list_return = array_return.tolist()
print(list_return)
list_return is the result you want.
If you're interested, here is an implementation in plain Python. I've only tested it on the dataset you provided.
a = [['a','b','c',1],
     ['a','b','d',2],
     ['a','e','d',3],
     ['a','e','c',4]]
b_dict = {}
for row in a:
    key = (row[0], row[1])
    b_dict[key] = b_dict[key] + row[3] if key in b_dict else row[3]
b = [[key[0], key[1], value] for key, value in b_dict.items()]
