Get N Largest Date Pandas - python

I had posted this question earlier and need to expand on the application: I now need to get the Nth max date for each Vendor:
#import pandas as pd
#df = pd.read_clipboard()
#df['Insert_Date'] = pd.to_datetime(df['Insert_Date'])
# used in example below
#df2 = df.sort_values(['Vendor','Insert_Date']).drop_duplicates(['Vendor'],keep='last')
Vendor Insert_Date Total
Steph 2017-10-25 2
Matt 2017-10-31 13
Chris 2017-11-03 3
Steve 2017-10-23 11
Chris 2017-10-27 3
Steve 2017-11-01 11
If I needed to get the 2nd max date expected output would be:
Vendor Insert_Date Total
Steph 2017-10-25 2
Steve 2017-10-23 11
Matt 2017-10-31 13
Chris 2017-10-27 3
I can easily get the 2nd max date by using df2 from the example above with df.loc[~df.index.isin(df2.index)], but if I need to get the 50th max value, that is a lot of intermediate dataframes to build just to use isin()...
I have also tried df.groupby('Vendor')['Insert_Date'].nlargest(N_HERE), which gets me close, but I then need to pick out the Nth value for each Vendor.
I have also tried filtering out the df by Vendor:
df.loc[df['Vendor']=='Chris', 'Insert_Date'].nlargest(2)
but if I try to get the second record with df.loc[df['Vendor']=='Chris', 'Insert_Date'].nlargest(2)[2], it returns Timestamp('2017-11-03 00:00:00'). Instead I need to use df.loc[df['Vendor']=='Chris', 'Insert_Date'].nlargest(2)[1:2]. Why must I use list slicing here and not simply [2]?
In summary: how do I return the Nth largest date by Vendor?

I might've misunderstood your initial problem. You can sort on Insert_Date, and then use groupby + apply in this manner:
n = 9
df.sort_values('Insert_Date')\
  .groupby('Vendor', as_index=False).apply(lambda x: x.iloc[-n])
For your example data, it seems n = 0 does the trick.
df.sort_values('Insert_Date')\
  .groupby('Vendor', as_index=False).apply(lambda x: x.iloc[0])
Vendor Insert_Date Total
0 Chris 2017-10-27 3
1 Matt 2017-10-31 13
2 Steph 2017-10-25 2
3 Steve 2017-10-23 11
Beware, this code will throw errors if your Vendor groups are smaller in size than n.
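To sidestep that error, a defensive variant (a sketch, not from the thread: the helper `nth_largest_row` is my own name) clamps the offset to the group size, so one-row vendors fall back to their only row instead of raising:

```python
import pandas as pd

df = pd.DataFrame({
    'Vendor': ['Steph', 'Matt', 'Chris', 'Steve', 'Chris', 'Steve'],
    'Insert_Date': pd.to_datetime(['2017-10-25', '2017-10-31', '2017-11-03',
                                   '2017-10-23', '2017-10-27', '2017-11-01']),
    'Total': [2, 13, 3, 11, 3, 11],
})

n = 2  # take the nth-largest date per vendor

def nth_largest_row(g, n):
    # after an ascending sort, iloc[-n] is the nth-largest row; clamp the
    # position so groups smaller than n return their oldest row instead
    return g.iloc[max(len(g) - n, 0)]

out = (df.sort_values('Insert_Date')
         .groupby('Vendor', as_index=False)
         .apply(lambda g: nth_largest_row(g, n)))
```

For the example data this reproduces the expected output, including the one-row vendors Steph and Matt.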

I would use head (you can pick the top n; here I am using 2) and then drop_duplicates keeping the last row per Vendor.
df.sort_values('Insert_Date', ascending=False).groupby('Vendor').\
    head(2).drop_duplicates('Vendor', keep='last').sort_index()
Out[609]:
Vendor Insert_Date Total
0 Steph 2017-10-25 2
1 Matt 2017-10-31 13
3 Steve 2017-10-23 11
4 Chris 2017-10-27 3

I like @COLDSPEED's answer as it's more direct. Here is one using nlargest, which involves an intermediate step of creating an nth_largest column:
n = 2
df['nth_largest'] = df.groupby('Vendor').Insert_Date.transform(lambda x: x.nlargest(n).min())
df.drop_duplicates(subset=['Vendor', 'nth_largest']).drop('Insert_Date', axis=1)
Vendor Total nth_largest
0 Steph 2 2017-10-25
1 Matt 13 2017-10-31
2 Chris 3 2017-10-27
3 Steve 11 2017-10-23

Related

Python dataframe find closest date for each ID

I have a dataframe like this:
data = {'SalePrice': [10, 10, 10, 20, 20, 3, 3, 1, 4, 8, 8],
        'HandoverDateA': ['2022-04-30', '2022-04-30', '2022-04-30', '2022-04-30',
                          '2022-04-30', '2022-04-30', '2022-04-30', '2022-04-30',
                          '2022-04-30', '2022-03-30', '2022-03-30'],
        'ID': ['Tom', 'Tom', 'Tom', 'Joseph', 'Joseph', 'Ben', 'Ben', 'Eden',
               'Tim', 'Adam', 'Adam'],
        'Tranche': ['Red', 'Red', 'Red', 'Red', 'Red', 'Blue', 'Blue', 'Red',
                    'Red', 'Red', 'Red'],
        'Totals': [100, 100, 100, 50, 50, 90, 90, 70, 60, 70, 70],
        'Sent': ['2022-01-18', '2022-02-19', '2022-03-14', '2022-03-14',
                 '2022-04-22', '2022-03-03', '2022-02-07', '2022-01-04',
                 '2022-01-10', '2022-01-15', '2022-03-12'],
        'Amount': [20, 10, 14, 34, 15, 60, 25, 10, 10, 40, 20],
        'Opened': ['2021-12-29', '2021-12-29', '2021-12-29', '2022-12-29',
                   '2022-12-29', '2021-12-19', '2021-12-19', '2021-12-29',
                   '2021-12-29', '2021-12-29', '2021-12-29']}
I need to find the Sent date which is closest to the HandoverDateA. I've seen plenty of examples that work when you give a single date to search against, but here the date I want to be closest to can change for every ID. I have tried to adapt the following:
def nearest(items, pivot):
    return min([i for i in items if i <= pivot], key=lambda x: abs(x - pivot))
I have also tried writing a loop where I make a dataframe for each ID and use max on the date column, then stick them together, but it's incredibly slow!
Thanks for any suggestions :)
IIUC, you can use (after building the frame with df = pd.DataFrame(data)):
df[['HandoverDateA', 'Sent']] = df[['HandoverDateA', 'Sent']].apply(pd.to_datetime)
out = df.loc[df['HandoverDateA']
             .sub(df['Sent']).abs()
             .groupby(df['ID']).idxmin()]
Output:
SalePrice HandoverDateA ID Tranche Totals Sent Amount Opened
10 8 2022-03-30 Adam Red 70 2022-03-12 20 2021-12-29
5 3 2022-04-30 Ben Blue 90 2022-03-03 60 2021-12-19
7 1 2022-04-30 Eden Red 70 2022-01-04 10 2021-12-29
4 20 2022-04-30 Joseph Red 50 2022-04-22 15 2022-12-29
8 4 2022-04-30 Tim Red 60 2022-01-10 10 2021-12-29
2 10 2022-04-30 Tom Red 100 2022-03-14 14 2021-12-29
Considering that the goal is to find the sent date which is closest to the HandoverDate, one approach would be as follows.
First of all, create the dataframe df from data with pandas.DataFrame
import pandas as pd
df = pd.DataFrame(data)
Then, make sure that the columns HandoverDateA and Sent are of datetime using pandas.to_datetime
df['HandoverDateA'] = pd.to_datetime(df['HandoverDateA'])
df['Sent'] = pd.to_datetime(df['Sent'])
Then, in order to make it more convenient, create a column, diff, to store the absolute value of the difference between the columns HandoverDateA and Sent
df['diff'] = (df['HandoverDateA'] - df['Sent']).dt.days.abs()
With that column, one can simply sort by that column as follows
df = df.sort_values(by=['diff'])
[Out]:
SalePrice HandoverDateA ID ... Amount Opened diff
4 20 2022-04-30 Joseph ... 15 2022-12-29 8
10 8 2022-03-30 Adam ... 20 2021-12-29 18
2 10 2022-04-30 Tom ... 14 2021-12-29 47
5 3 2022-04-30 Ben ... 60 2021-12-19 58
8 4 2022-04-30 Tim ... 10 2021-12-29 110
7 1 2022-04-30 Eden ... 10 2021-12-29 116
and the first row is the one where Sent is closest to HandOverDateA.
With the column diff, one option to get the one where diff is minimum is with pandas.DataFrame.query as follows
df = df.query('diff == diff.min()')
[Out]:
SalePrice HandoverDateA ID Tranche ... Sent Amount Opened diff
4 20 2022-04-30 Joseph Red ... 2022-04-22 15 2022-12-29 8
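If you want this per ID rather than the single global minimum, one option under the same diff column (a sketch, not from the original answer, shown here on a trimmed subset of the data) is to sort by diff and keep the first row per ID:

```python
import pandas as pd

# a trimmed subset of the thread's data, for illustration
df = pd.DataFrame({
    'ID': ['Tom', 'Tom', 'Joseph', 'Adam', 'Adam'],
    'HandoverDateA': pd.to_datetime(['2022-04-30', '2022-04-30', '2022-04-30',
                                     '2022-03-30', '2022-03-30']),
    'Sent': pd.to_datetime(['2022-01-18', '2022-03-14', '2022-04-22',
                            '2022-01-15', '2022-03-12']),
})

df['diff'] = (df['HandoverDateA'] - df['Sent']).dt.days.abs()

# smallest diff first, then one row per ID
closest = df.sort_values('diff').drop_duplicates('ID')
```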

Python DataFrame: Change status of a row in df based on conditions from another df?

I have two df's: one with student details and another with student attendance records.
details_df
name roll start_day last_day
0 anthony 9 2020-09-08 2020-09-28
1 paul 6 2020-09-01 2020-09-15
2 marcus 10 2020-08-08 2020-09-08
attendance_df
name roll status day
0 anthony 9 absent 2020-07-25
1 anthony 9 present 2020-09-15
2 anthony 9 absent 2020-09-25
3 paul 6 present 2020-09-02
4 marcus 10 present 2020-07-01
5 marcus 10 present 2020-08-17
I am trying to get an absent True/False flag for each student, based on the records between start_day and last_day.
Ex: user anthony has two records in attendance_df between start_day and last_day, out of three total records.
If either of those two records has status=absent, then mark that user as True.
Expected Output
name roll absent
0 anthony 9 True
1 paul 6 False
2 marcus 10 False
I have tried turning details_df into a list and looping over attendance_df. But is there a more efficient way?
You need to do a merge (i.e. a join operation) and filter for the rows where day falls between start_day and last_day. Then a group-by + apply (i.e. a grouped aggregation operation):
merged_df = attendance_df.merge(details_df, on=['name', 'roll'])
df = (merged_df[merged_df.day.between(merged_df.start_day, merged_df.last_day)]
      .groupby(['name', 'roll'])
      .apply(lambda x: (x.status == 'absent').any())
      .reset_index())
df.columns = ['name', 'roll', 'absent']
To get:
df
name roll absent
0 anthony 9 True
1 marcus 10 False
2 paul 6 False
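A self-contained run of the merge-filter-aggregate approach above, with the thread's two tables rebuilt inline (dates converted to datetime so the between comparison is unambiguous):

```python
import pandas as pd

details_df = pd.DataFrame({
    'name': ['anthony', 'paul', 'marcus'],
    'roll': [9, 6, 10],
    'start_day': pd.to_datetime(['2020-09-08', '2020-09-01', '2020-08-08']),
    'last_day': pd.to_datetime(['2020-09-28', '2020-09-15', '2020-09-08']),
})
attendance_df = pd.DataFrame({
    'name': ['anthony', 'anthony', 'anthony', 'paul', 'marcus', 'marcus'],
    'roll': [9, 9, 9, 6, 10, 10],
    'status': ['absent', 'present', 'absent', 'present', 'present', 'present'],
    'day': pd.to_datetime(['2020-07-25', '2020-09-15', '2020-09-25',
                           '2020-09-02', '2020-07-01', '2020-08-17']),
})

# join details onto attendance, keep rows within each student's window,
# then flag students with any 'absent' record in that window
merged_df = attendance_df.merge(details_df, on=['name', 'roll'])
df = (merged_df[merged_df.day.between(merged_df.start_day, merged_df.last_day)]
      .groupby(['name', 'roll'])
      .apply(lambda x: (x.status == 'absent').any())
      .reset_index())
df.columns = ['name', 'roll', 'absent']
```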
Merge, then groupby() and use a lambda to flag each group that has an absent record between start and last:
df2 = pd.merge(attendance_df, details_df, how='left', on=['name', 'roll'])
df2.groupby(['name', 'roll']).apply(lambda x: (x['day']
    .between(x['start_day'], x['last_day']) & x['status'].eq('absent')).any()).to_frame('absent')
              absent
name    roll
anthony 9       True
marcus  10     False
paul    6      False

How to shift the values of a certain group by different amounts

I have a DataFrame that looks like this:
user data
0 Kevin 1
1 Kevin 3
2 Sara 5
3 Kevin 23
...
And I want to get the subsequent values (looking, let's say, 2 entries forward) as rows:
user data data_1 data_2
0 Kevin 1 3 23
1 Sara 5 24 NaN
2 Kim ...
...
Right now I'm able to do this through the following:
_temp = df.groupby(['user'], as_index=False)['data']
for i in range(1, 3):
    df['data_{0}'.format(i)] = _temp.shift(-i)
I feel like my approach is very inefficient and that there is a much faster way to do this (esp. when the number of lookahead/lookback values go up)!
You can use groupby.cumcount() with set_index() and unstack():
m = df.assign(k=df.groupby('user').cumcount().astype(str)) \
      .set_index(['user', 'k']).unstack()
m.columns = m.columns.map('_'.join)
print(m)
data_0 data_1 data_2
user
Kevin 1.0 3.0 23.0
Sara 5.0 NaN NaN
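An alternative sketch (my own, not from the thread) that scales the questioner's shift idea to any number of lookahead columns by concatenating group-wise shifts, then keeping the first row per user:

```python
import pandas as pd

df = pd.DataFrame({
    'user': ['Kevin', 'Kevin', 'Sara', 'Kevin', 'Sara'],
    'data': [1, 3, 5, 23, 24],
})

n_ahead = 2
g = df.groupby('user')['data']

# one shifted column per lookahead step, built in a single concat
out = pd.concat(
    [df] + [g.shift(-i).rename(f'data_{i}') for i in range(1, n_ahead + 1)],
    axis=1,
)

# the first row of each user already carries all its lookahead values
first = out.drop_duplicates('user').reset_index(drop=True)
```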

Pandas nested loop with a row matching a specific value

What is the fastest way to iterate through the rest of a dataframe, given rows matching some specific values?
For example, say I have a dataframe with 'date', 'name' and 'movie' columns. There could be many users and movies. I want every row where a person named John has seen a movie that someone named Alicia had already seen before.
Input dataframe could be :
date name movie
0 2018-01-16 10:33:59 Alicia Titanic
1 2018-01-17 08:49:13 Chandler Avatar
2 2018-01-18 09:29:09 Luigi Glass
3 2018-01-19 09:45:27 Alicia Die Hard
4 2018-01-20 10:08:05 Bouchra Pulp Fiction
5 2018-01-26 10:21:47 Bariza Glass
6 2018-01-27 10:15:32 Peggy Bumblebee
7 2018-01-20 10:08:05 John Titanic
8 2018-01-26 10:21:47 Bariza Glass
9 2018-01-27 10:15:32 John Titanic
The result should be :
date name movie
0 2018-01-16 10:33:59 Alicia Titanic
7 2018-01-20 10:08:05 John Titanic
9 2018-01-27 10:15:32 John Titanic
For the moment I am doing the following:
alicias = df[df['name'] == 'Alicia']
df_res = pd.DataFrame(columns=df.columns)
for i in alicias.index:
    df_res = df_res.append(alicias.loc[i], sort=False)
    df_johns = df[(df['date'] > alicias['date'][i])
                  & (df['name'] == 'John')
                  & (df['movie'] == alicias['movie'][i])]
    df_res = df_res.append(df_johns, sort=False)
It works but it is very slow. I could also use a groupby, which is much faster, but I want the result to keep the initial row (the row with 'Alicia' in the example), and I can't find a way to do that with a groupby.
Any help?
Here's a way to do it. Say you have the following dataframe:
date user movie
0 2018-01-02 Alicia Titanic
1 2018-01-13 John Titanic
2 2018-01-22 John Titanic
3 2018-04-02 John Avatar
4 2018-04-05 Alicia Avatar
5 2018-05-19 John Avatar
IIUC the correct solution should not contain row 3, as Alicia had not seen Avatar yet at that point. So you could do (the cast to bool is needed because a grouped cumsum of booleans yields integers, which cannot be used directly as a mask):
df[df.user.eq('Alicia').groupby(df.movie).cumsum().astype(bool)]
date user movie
0 2018-01-02 Alicia Titanic
1 2018-01-13 John Titanic
2 2018-01-22 John Titanic
4 2018-04-05 Alicia Avatar
5 2018-05-19 John Avatar
Explanation:
The following returns True where the user is Alicia:
df.user.eq('Alicia')
0 True
1 False
2 False
3 False
4 True
5 False
Name: user, dtype: bool
What you could do now is to GroupBy the movies, and apply a cumsum on the groups, so only the rows after the first True will also become True:
0 True
1 True
2 True
3 False
4 True
5 True
Name: user, dtype: bool
Finally, use boolean indexing on the original dataframe to select the rows of interest.
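Putting the pieces together as a runnable sketch on the answer's example frame (note the explicit cast back to bool, since on recent pandas the grouped cumsum of booleans is integer-typed):

```python
import pandas as pd

df = pd.DataFrame({
    'date': pd.to_datetime(['2018-01-02', '2018-01-13', '2018-01-22',
                            '2018-04-02', '2018-04-05', '2018-05-19']),
    'user': ['Alicia', 'John', 'John', 'John', 'Alicia', 'John'],
    'movie': ['Titanic', 'Titanic', 'Titanic', 'Avatar', 'Avatar', 'Avatar'],
})

# per movie, the mask turns (and stays) truthy once Alicia's row has passed;
# this assumes the frame is already sorted by date, as it is here
mask = df.user.eq('Alicia').groupby(df.movie).cumsum().astype(bool)
out = df[mask]
```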

Is there an "ungroup by" operation opposite to .groupby in pandas?

Suppose we take a pandas dataframe...
name age family
0 john 1 1
1 jason 36 1
2 jane 32 1
3 jack 26 2
4 james 30 2
Then do a groupby() ...
group_df = df.groupby('family')
group_df = group_df.aggregate({'name': name_join, 'age': 'mean'})
Then do some aggregate/summarize operation (in my example, my function name_join aggregates the names):
def name_join(list_names, concat='-'):
return concat.join(list_names)
The grouped summarized output is thus:
age name
family
1 23 john-jason-jane
2 28 jack-james
Question:
Is there a quick, efficient way to get to the following from the aggregated table?
name age family
0 john 23 1
1 jason 23 1
2 jane 23 1
3 jack 28 2
4 james 28 2
(Note: the age column values are just examples, I don't care for the information I am losing after averaging in this specific example)
The way I thought I could do it does not look too efficient:
- create an empty dataframe
- from every line in group_df, separate the names
- return a dataframe with as many rows as there are names in the starting row
- append the output to the empty dataframe
The rough equivalent is .reset_index(), but it may not be helpful to think of it as the "opposite" of groupby().
You are splitting a string into pieces, and maintaining each piece's association with 'family'. This old answer of mine does the job.
Just set 'family' as the index column first, refer to the link above, and then reset_index() at the end to get your desired result.
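A sketch of that split-and-expand step with str.split plus explode (available in pandas 0.25 and later), starting from the aggregated table:

```python
import pandas as pd

# the aggregated table from the question, with 'family' as the index
agg = pd.DataFrame(
    {'age': [23, 28], 'name': ['john-jason-jane', 'jack-james']},
    index=pd.Index([1, 2], name='family'),
)

out = (agg.assign(name=agg['name'].str.split('-'))  # back to lists of names
          .explode('name')                          # one row per name
          .reset_index())                           # 'family' back as a column
```

Each exploded row keeps its group's age and family values, which matches the desired output (up to column order).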
It turns out that DataFrame.groupby() returns an object with the original data stored in its obj attribute. So ungrouping is just pulling out the original data.
group_df = df.groupby('family')
group_df.obj
Example
>>> dat_1 = df.groupby("category_2")
>>> dat_1
<pandas.core.groupby.generic.DataFrameGroupBy object at 0x7fce78b3dd00>
>>> dat_1.obj
order_date category_2 value
1 2011-02-01 Cross Country Race 324400.0
2 2011-03-01 Cross Country Race 142000.0
3 2011-04-01 Cross Country Race 498580.0
4 2011-05-01 Cross Country Race 220310.0
5 2011-06-01 Cross Country Race 364420.0
.. ... ... ...
535 2015-08-01 Triathalon 39200.0
536 2015-09-01 Triathalon 75600.0
537 2015-10-01 Triathalon 58600.0
538 2015-11-01 Triathalon 70050.0
539 2015-12-01 Triathalon 38600.0
[531 rows x 3 columns]
Here's a complete example that recovers the original dataframe from the grouped object
import pandas

def name_join(list_names, concat='-'):
    return concat.join(list_names)

print('create dataframe\n')
df = pandas.DataFrame({'name': ['john', 'jason', 'jane', 'jack', 'james'],
                       'age': [1, 36, 32, 26, 30], 'family': [1, 1, 1, 2, 2]})
df.index.name = 'indexer'
print(df)
print('create group_by object')
group_obj_df = df.groupby('family')
print(group_obj_df)
print('\nrecover grouped df')
group_joined_df = group_obj_df.aggregate({'name': name_join, 'age': 'mean'})
group_joined_df
create dataframe
name age family
indexer
0 john 1 1
1 jason 36 1
2 jane 32 1
3 jack 26 2
4 james 30 2
create group_by object
<pandas.core.groupby.generic.DataFrameGroupBy object at 0x7fbfdd9dd048>
recover grouped df
name age
family
1 john-jason-jane 23
2 jack-james 28
print('\nRecover the original dataframe')
print(pandas.concat([group_obj_df.get_group(key) for key in group_obj_df.groups]))
Recover the original dataframe
name age family
indexer
0 john 1 1
1 jason 36 1
2 jane 32 1
3 jack 26 2
4 james 30 2
There are a few ways to undo DataFrame.groupby; one way is DataFrame.groupby(...).filter(lambda x: True), which gets back the original DataFrame.
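For the specific output the question shows (every original row kept, with age replaced by its family's mean), groupby().transform broadcasts an aggregate back to the original shape without any regrouping, which may be the most direct route:

```python
import pandas as pd

df = pd.DataFrame({
    'name': ['john', 'jason', 'jane', 'jack', 'james'],
    'age': [1, 36, 32, 26, 30],
    'family': [1, 1, 1, 2, 2],
})

# transform returns a result the same length as df:
# each row receives its family's mean age
df['age'] = df.groupby('family')['age'].transform('mean')
```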
