Groupby column if value is less than some value - Python

I have a dataframe like
df = pd.DataFrame({'time': [1, 5, 100, 250, 253, 260, 700], 'qty': [3, 6, 2, 5, 64, 2, 5]})
df['time_delta'] = df.time.diff()
and I would like to group by time_delta such that all consecutive rows where time_delta is less than 10 are grouped together, the time_delta column can be dropped, and qty is summed.
The expected result is
pd.DataFrame({'time': [1, 100, 250, 700], 'qty': [9, 2, 71, 5]})
Basically I am hoping there is something like df.groupby(time_delta_func(10)).agg({'time': 'min', 'qty': 'sum'}). I read up on pd.Grouper, but its time-based grouping seems strict and interval-based.

You can do it with gt (greater than) and cumsum, which creates a new group each time time_delta is greater than 10:
res = (
    df.groupby(df['time_delta'].gt(10).cumsum(), as_index=False)
      .agg({'time': 'first', 'qty': 'sum'})
)
print(res)
   time  qty
0     1    9
1   100    2
2   250   71
3   700    5
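Note that the first value of time_delta is NaN, which gt(10) evaluates as False, so the first row simply starts group 0. A minimal self-contained sketch of the same idea, computing the diff inline rather than relying on the precomputed time_delta column:
import pandas as pd

df = pd.DataFrame({'time': [1, 5, 100, 250, 253, 260, 700],
                   'qty': [3, 6, 2, 5, 64, 2, 5]})

# a new group starts whenever the gap to the previous row exceeds 10
groups = df['time'].diff().gt(10).cumsum()
res = df.groupby(groups).agg({'time': 'first', 'qty': 'sum'}).reset_index(drop=True)
print(res)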

How to find the overlapping count of rows between two keys of a multiindex dataframe?

Two dataframes with the same index (dates) have been concatenated under different keys into a multiindex dataframe. Each dataframe has different products as column names and their prices as values. I had to find the correlation between the two dataframes and the count of overlapping periods. The correlation is done, but how do I count, for each product of dataframe 1 against each product of dataframe 2, the number of overlapping rows over the same period, and produce the result as a dataframe with the products of dataframe 1 as column names, the products of dataframe 2 as row names, and the counts as values? It should be a matrix.
For example:
df1 = pd.DataFrame(data={'col1': ['1/12/2020', '2/12/2020', '3/12/2020'],
                         'col2': [10, 11, 12], 'col3': [13, 14, 10]})
df2 = pd.DataFrame(data={'col1': ['1/12/2020', '2/12/2020', '3/12/2020'],
                         'A': [10, 9, 12], 'B': [4, 14, 2]})
df1 = df1.set_index('col1')
df2 = df2.set_index('col1')
concat_data1 = pd.concat([df1, df2], axis=1, keys=['df1', 'df2'])
concat_data1
            df1       df2
           col2 col3    A   B
col1
1/12/2020    10   13   10   4
2/12/2020    11   14    9  14
3/12/2020    12   10   12   2
The needed output (overlapping period counts) is:
   col2  col3
A     2     0
B     0     1
This is a way of doing it:
import itertools
import pandas as pd
data1 = {
    'col1': ['1/12/2020', '2/12/2020', '3/12/2020', '4/12/2020'],
    'col2': [10, 11, 12, 14],
    'col3': [13, 14, 10, 6],
    'col4': [10, 9, 15, 10],
    'col5': [10, 9, 15, 5],
}
data2 = {
    'col1': ['1/12/2020', '2/12/2020', '3/12/2020', '4/12/2020'],
    'A': [10, 9, 12, 14],
    'B': [4, 14, 2, 9],
    'C': [6, 9, 1, 3],
    'D': [6, 9, 1, 8],
}
df1 = pd.DataFrame(data1).set_index('col1')
df2 = pd.DataFrame(data2).set_index('col1')
concat_data = pd.concat([df1, df2], axis=1, keys=['df1', 'df2'])
columns = {df: list(concat_data[df].columns) for df in set(concat_data.columns.get_level_values(0))}
matrix = pd.DataFrame(data=0, columns=columns['df1'], index=columns['df2'])
# for every (df1 column, df2 column) pair, count the rows where the values are equal
for row in concat_data.iterrows():
    for cols in itertools.product(columns['df1'], columns['df2']):
        matrix.loc[cols[1], cols[0]] += row[1]['df1'][cols[0]] == row[1]['df2'][cols[1]]
print(matrix)
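Iterating rows with iterrows is slow on larger frames. A broadcast-based alternative (a sketch of the same counting logic, not part of the original answer) compares every df1 column against every df2 column in one step:
# boolean cube of shape (rows, df1 columns, df2 columns); summing over the rows
# counts, per column pair, how many dates have equal values
overlap = pd.DataFrame(
    (df1.values[:, :, None] == df2.values[:, None, :]).sum(axis=0),
    index=df1.columns, columns=df2.columns,
).T  # transpose so df1 products are the columns and df2 products the rows
print(overlap)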

Get index of rows after groupby and nlargest

I have a large dataframe where I want to use groupby and nlargest to look for the second, third, fourth, and fifth largest value of each group. There are over 500 groups and each group has over 1000 values. I also have other columns in the dataframe that I want to keep after applying groupby and nlargest. My dataframe looks like this:
df = pd.DataFrame({
    'group': [1, 2, 3, 3, 4, 5, 6, 7, 7, 8],
    'a': [4, 5, 3, 1, 2, 20, 10, 40, 50, 30],
    'b': [20, 10, 40, 50, 30, 4, 5, 3, 1, 2],
    'c': [25, 20, 5, 15, 10, 25, 20, 5, 15, 10]
})
To look for the second, third, fourth largest and so on of each group for column a, I use
secondlargest = df.groupby(['group'], as_index=False)['a'].apply(lambda grp: grp.nlargest(2).min())
which returns
   group   a
0      1   4
1      2   5
2      3   1
3      4   2
4      5  20
5      6  10
6      7  40
7      8  30
I need columns b and c present in this resulting dataframe. I use the following to subset the original dataframe but it returns an empty dataframe. How should I modify the code?
secondsubset = df[df.groupby(['group'])['a'].apply(lambda grp: grp.nlargest(2).min())]
If I understand your goal correctly, you should be able to just drop as_index=False, use idxmin instead of min, and pass the result to df.loc:
df.loc[df.groupby('group')['a'].apply(lambda grp: grp.nlargest(2).idxmin())]
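On the sample data this selects the full row holding the second-largest a of each group, so columns b and c come along. A small generalisation to the k-th largest (a sketch with a hypothetical helper name, not from the original answer):
def kth_largest_rows(frame, by, col, k):
    # rows holding the k-th largest `col` per `by` group; for groups with fewer
    # than k rows, nlargest(k).idxmin() falls back to the group's smallest value
    idx = frame.groupby(by)[col].apply(lambda grp: grp.nlargest(k).idxmin())
    return frame.loc[idx]

print(kth_largest_rows(df, 'group', 'a', 2))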
You can use agg with a lambda. It is neater:
df.groupby('group').agg(lambda grp: grp.nlargest(2).min())

Why is this Groupby transform not working?

For a dummy dataset, in which each id corresponds to one match:
df2 = pd.DataFrame(columns=['id', 'score', 'duration', 'user'],
                   data=[[1, 800, 60, 'abc'], [1, 900, 60, 'zxc'],
                         [2, 800, 250, 'abc'], [2, 5000, 250, 'bvc'],
                         [3, 6000, 250, 'zxc'], [3, 8000, 250, 'klp'],
                         [4, 1400, 500, 'kod'], [4, 8000, 500, 'bvc']])
If I want to keep only the ids where both records of the same id have duration greater than 120 and score greater than 1500, this works fine:
cond = df2['duration'].gt(120) & df2['score'].gt(1500)
out = df2[cond.groupby(df2['id']).transform('all')]
and returns the 2 rows of that id. However, if I want to keep only the pairs of ids where the user is 'abc', it does not work. I have tried:
out = df2[(df2['user'].eq('abc')).groupby(df2['id']).transform('all')]
out = df2[(df2['user'] == 'abc').groupby(df2['id']).transform('all')]
and they both return empty dataframes. How can I solve this? The outcome should be every match that user 'abc' played in.
From the comments, you want 'any', not 'all':
out = df2[(df2['user'] == 'abc').groupby(df2['id']).transform('any')]
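A quick way to see the difference on the dummy data (a sketch): 'all' keeps an id only if every one of its rows matches, while 'any' keeps the whole id as soon as one row has user 'abc', which is what "every match abc played in" requires.
mask_any = df2['user'].eq('abc').groupby(df2['id']).transform('any')
mask_all = df2['user'].eq('abc').groupby(df2['id']).transform('all')
print(df2[mask_any])  # both rows of ids 1 and 2 -- the matches 'abc' played in
print(df2[mask_all])  # empty, since no id consists solely of 'abc' rows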

Pandas: Get percentile value by specific rows

I am trying to get the percentile of the value in column value, based on the min and max columns:
import pandas as pd
d = {'value': [20, 10, -5],
     'min': [0, 10, -10],
     'max': [40, 20, 0]}
df = pd.DataFrame(data=d)
df
I want to obtain a new column "percentile", which looks like this:
d = {'value': [20, 10, -5],
     'min': [0, 10, -10],
     'max': [40, 20, 0],
     'percentile': [50, 0, 50]}
df_res = pd.DataFrame(data=d)
df_res
Thanks
You need to offset by the min so that value becomes a ratio of the min-max range. You can do:
df['percentile'] = (df['value'] - df['min']) / (df['max'] - df['min']) * 100
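On the sample frame this gives 50, 0, and 50, matching df_res. If value can ever fall outside the [min, max] range, an optional clip keeps the result between 0 and 100 (my addition, not part of the original answer):
# clip(0, 100) only guards against out-of-range values; it does not change the sample result
df['percentile'] = ((df['value'] - df['min']) / (df['max'] - df['min']) * 100).clip(0, 100)
print(df)
#    value  min  max  percentile
# 0     20    0   40        50.0
# 1     10   10   20         0.0
# 2     -5  -10    0        50.0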

Rolling sum for a window of 2 days

I am trying to compute, in Python, a rolling two-day sum of the Amount column based on Trans_Date and grouped by ID, for the table below.
ID  Trans_Date  Trans_Time  Amount
 1  03/23/2019    06:51:03     100
 1  03/24/2019    12:32:48     600
 1  03/24/2019    14:15:35      50
 1  06/05/2019    16:18:21      75
 2  02/01/2019    18:02:52     200
 2  02/02/2019    10:03:02     150
 2  02/03/2019    23:47:51     800
 3  01/18/2019    11:12:58    1000
 3  01/23/2019    22:12:41      15
Ultimately, I am trying to achieve the result below:
ID  Trans_Date  Trans_Time  Amount  2d_Running_Total
 1  03/23/2019    06:51:03     100               100
 1  03/24/2019    12:32:48     600               700
 1  03/24/2019    14:15:35     250               950
 1  06/05/2019    16:18:21      75                75
 2  02/01/2019    18:02:52     200               200
 2  02/02/2019    10:03:02     150               350
 2  02/03/2019    23:47:51     800               950
 3  01/18/2019    11:12:58    1000              1000
 3  01/23/2019    22:12:41      15                15
This thread came very close to solving it, but for records with multiple transactions on the same day it returns the same value for every row of that day:
https://python-forum.io/Thread-Rolling-sum-for-a-window-of-2-days-Pandas
This should do it:
import pandas as pd
# create dummy data
df = pd.DataFrame(
    columns=['ID', 'Trans_Date', 'Trans_Time', 'Amount'],
    data=[
        [1, '03/23/2019', '06:51:03', 100],
        [1, '03/24/2019', '12:32:48', 600],
        [1, '03/24/2019', '14:15:35', 250],
        [1, '06/05/2019', '16:18:21', 75],
        [2, '02/01/2019', '18:02:52', 200],
        [2, '02/02/2019', '10:03:02', 150],
        [2, '02/03/2019', '23:47:51', 800],
        [3, '01/18/2019', '11:12:58', 1000],
        [3, '01/23/2019', '22:12:41', 15]
    ]
)
df_out = pd.DataFrame(
    columns=['ID', 'Trans_Date', 'Trans_Time', 'Amount', '2d_Running_Total'],
    data=[
        [1, '03/23/2019', '06:51:03', 100, 100],
        [1, '03/24/2019', '12:32:48', 600, 700],
        [1, '03/24/2019', '14:15:35', 250, 950],
        [1, '06/05/2019', '16:18:21', 75, 75],
        [2, '02/01/2019', '18:02:52', 200, 200],
        [2, '02/02/2019', '10:03:02', 150, 350],
        [2, '02/03/2019', '23:47:51', 800, 950],
        [3, '01/18/2019', '11:12:58', 1000, 1000],
        [3, '01/23/2019', '22:12:41', 15, 15]
    ]
)
# convert into datetime object and set as index
df['Trans_DateTime'] = pd.to_datetime(df['Trans_Date'] + ' ' + df['Trans_Time'])
df = df.set_index('Trans_DateTime')
# group by ID and apply rolling window to the amount column
df['2d_Running_Total'] = df.groupby('ID')['Amount'].rolling('2d').sum().values.astype(int)
df.reset_index(drop=True)
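One caveat: the positional .values assignment relies on the frame already being ordered by ID and timestamp, which holds for the dummy data. If that is not guaranteed, sorting first keeps the alignment (a defensive variant, my addition rather than part of the original answer):
# the datetime index is named 'Trans_DateTime', so it can be used as a sort key
df = df.sort_values(['ID', 'Trans_DateTime'])
df['2d_Running_Total'] = (
    df.groupby('ID')['Amount'].rolling('2d').sum().astype(int).values
)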
