How to keep the last group within a group using Pandas?
For example, given the following dataset:
id product date
0 220 6647 2014-09-01
1 220 6647 2014-09-03
2 220 6647 2014-10-16
3 826 6647 2014-11-11
4 826 6647 2014-12-09
5 826 6647 2015-05-19
6 901 4555 2014-09-01
7 901 4555 2014-10-05
8 901 4555 2014-11-01
9 401 4555 2015-01-05
10 401 4555 2015-02-01
how can I succinctly get the last id group from each product group?
id product date
3 826 6647 2014-11-11
4 826 6647 2014-12-09
5 826 6647 2015-05-19
9 401 4555 2015-01-05
10 401 4555 2015-02-01
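One possible approach (a sketch, not from the original thread): broadcast the last id seen in each product group with transform('last'), then keep the rows whose id matches it. This assumes the frame is already ordered so that the id you want appears last within each product group:
last_id = df.groupby('product', sort=False)['id'].transform('last')
print(df[df['id'] == last_id])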
Related
Same question as here: group by pandas dataframe and select latest in each group, except that instead of the latest date, I would like to get the next upcoming date for each group.
So given a dataframe sorted by date:
id product date
0 220 6647 2020-09-01
1 220 6647 2020-10-03
2 220 6647 2020-12-16
3 826 3380 2020-11-11
4 826 3380 2020-12-09
5 826 3380 2021-05-19
6 901 4555 2020-09-01
7 901 4555 2020-12-01
8 901 4555 2021-11-01
Using today's date (2020-12-01) to determine the next upcoming date, grouping by id or product and selecting the next upcoming date should give:
id product date
2 220 6647 2020-12-16
4 826 3380 2020-12-09
8 901 4555 2021-11-01
Filter the dates first, then drop duplicates:
df[df['date']>'2020-12-01'].sort_values(['id','date']).drop_duplicates('id')
Output:
id product date
2 220 6647 2020-12-16
4 826 3380 2020-12-09
8 901 4555 2021-11-01
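An equivalent sketch with groupby + idxmin, which picks the row with the earliest date after the cutoff in each id group (assuming date is a datetime column):
upcoming = df[df['date'] > '2020-12-01']
print(df.loc[upcoming.groupby('id')['date'].idxmin()])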
My question is based on this thread, where we group values of a pandas dataframe and select the latest (by date) from each group:
id product date
0 220 6647 2014-09-01
1 220 6647 2014-09-03
2 220 6647 2014-10-16
3 826 3380 2014-11-11
4 826 3380 2014-12-09
5 826 3380 2015-05-19
6 901 4555 2014-09-01
7 901 4555 2014-10-05
8 901 4555 2014-11-01
using the following
df.loc[df.groupby('id').date.idxmax()]
Say, however, that I want to add the condition that the latest (by date) is only selected among records spaced within +/- 5 days of each other. I.e., after grouping I want to find the latest within the following sub-groups:
0 220 6647 2014-09-01 #because only these two are within +/- 5 days of each other
1 220 6647 2014-09-03
2 220 6647 2014-10-16 #spaced more than 5 days apart from the two records above
3 826 3380 2014-11-11
.....
which yields
id product date
1 220 6647 2014-09-03
2 220 6647 2014-10-16
3 826 3380 2014-11-11
4 826 3380 2014-12-09
5 826 3380 2015-05-19
6 901 4555 2014-09-01
7 901 4555 2014-10-05
8 901 4555 2014-11-01
Dataset with price:
id product date price
0 220 6647 2014-09-01 100 #group 1
1 220 6647 2014-09-03 120 #group 1 --> pick this
2 220 6647 2014-09-05 0 #group 1
3 826 3380 2014-11-11 150 #group 2 --> pick this
4 826 3380 2014-12-09 23 #group 3 --> pick this
5 826 3380 2015-05-12 88 #group 4 --> pick this
6 901 4555 2015-05-15 32 #group 4
7 901 4555 2015-10-05 542 #group 5 --> pick this
8 901 4555 2015-11-01 98 #group 6 --> pick this
I think you need to create the groups with apply and a list comprehension using between, then convert them to numeric group labels with factorize, and finally use your original solution with loc + idxmax:
df['date'] = pd.to_datetime(df['date'])
df = df.reset_index(drop=True)
td = pd.Timedelta('5 days')
def f(x):
    # for each date, collect the tuple of row indices within +/- 5 days of it
    x['g'] = [tuple(x.index[x['date'].between(i - td, i + td)]) for i in x['date']]
    return x
df2 = df.groupby('id', group_keys=False).apply(f)  # group_keys=False keeps the flat index
df2['g'] = pd.factorize(df2['g'])[0]  # turn the tuples into numeric group labels
print(df2)
id product date price g
0 220 6647 2014-09-01 100 0
1 220 6647 2014-09-03 120 0
2 220 6647 2014-09-05 0 0
3 826 3380 2014-11-11 150 1
4 826 3380 2014-12-09 23 2
5 826 3380 2015-05-12 88 3
6 901 4555 2015-05-15 32 4
7 901 4555 2015-10-05 542 5
8 901 4555 2015-11-01 98 6
df3 = df2.loc[df2.groupby('g')['price'].idxmax()]
print(df3)
id product date price g
1 220 6647 2014-09-03 120 0
3 826 3380 2014-11-11 150 1
4 826 3380 2014-12-09 23 2
5 826 3380 2015-05-12 88 3
6 901 4555 2015-05-15 32 4
7 901 4555 2015-10-05 542 5
8 901 4555 2015-11-01 98 6
Or use a two-liner:
df2 = df.groupby('id')['date'].diff(-1).dt.days.abs().fillna(6)
print(df.loc[df2.index[df2 > 5].tolist()])
Output:
id product date
1 220 6647 2014-09-03
2 220 6647 2014-10-16
3 826 3380 2014-11-11
4 826 3380 2014-12-09
5 826 3380 2015-05-19
6 901 4555 2014-09-01
7 901 4555 2014-10-05
8 901 4555 2014-11-01
So take the day difference to the next row within each group, make it absolute, fill the last row of each group with a value above the threshold, keep only the rows spaced more than 5 days from the next record, and use those indexes to slice df.
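For comparison, on the dataset with price, here is a sketch of a consecutive-gap grouping built with cumsum: start a new group whenever the id changes or the gap to the previous row exceeds 5 days. Because it chains neighbouring rows, it can split overlapping windows slightly differently than the between-based tuples above:
df['date'] = pd.to_datetime(df['date'])
new_group = (df['id'].ne(df['id'].shift()) |
             (df.groupby('id')['date'].diff() > pd.Timedelta('5 days')))
df['g'] = new_group.cumsum()
print(df.loc[df.groupby('g')['price'].idxmax()])  # max price per group, as before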
how do I add an id column to a dataframe? The values between 0 and 100 should have an id of 1, otherwise 2.
time values
2018-03-19 14:31:17.200 1095
2018-03-19 14:31:17.300 2296
2018-03-19 14:31:17.400 2147
2018-03-19 14:31:17.500 309
2018-03-19 14:31:17.600 244
2018-03-19 14:31:17.700 263
2018-03-19 14:31:17.800 548
I think you need numpy.where with a condition created by between (which is inclusive on both ends by default):
import numpy as np
df['id'] = np.where(df['values'].between(0, 100), 1, 2)
print(df)
time values id
1 2018-03-19 14:31:17.200 1095 2
2 2018-03-19 14:31:17.300 2296 2
3 2018-03-19 14:31:17.400 2147 2
4 2018-03-19 14:31:17.500 309 2
5 2018-03-19 14:31:17.600 244 2
6 2018-03-19 14:31:17.700 263 2
7 2018-03-19 14:31:17.800 548 2
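An equivalent sketch using boolean indexing with loc, if you prefer assignment over np.where:
df['id'] = 2
df.loc[df['values'].between(0, 100), 'id'] = 1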
A B C D yearweek
0 245 95 60 30 2014-48
1 245 15 70 25 2014-49
2 150 275 385 175 2014-50
3 100 260 170 335 2014-51
4 580 925 535 2590 2015-02
5 630 126 485 2115 2015-03
6 425 90 905 1085 2015-04
7 210 670 655 945 2015-05
The last column contains the year along with the week number. Is it possible to convert this to a datetime column with pd.to_datetime?
I've tried:
pd.to_datetime(df.yearweek, format='%Y-%U')
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2015-01-01
5 2015-01-01
6 2015-01-01
7 2015-01-01
Name: yearweek, dtype: datetime64[ns]
But that output is incorrect, even though I believe %U is the format string for the week number. What am I missing here?
You need another directive to specify the day of the week - without one, strptime ignores the week number entirely and falls back to January 1, which is exactly the output you got. Appending '-0' picks the Sunday of each week:
df = pd.to_datetime(df.yearweek.add('-0'), format='%Y-%W-%w')
print(df)
0 2014-12-07
1 2014-12-14
2 2014-12-21
3 2014-12-28
4 2015-01-18
5 2015-01-25
6 2015-02-01
7 2015-02-08
Name: yearweek, dtype: datetime64[ns]
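As an aside, if the numbers are ISO week numbers, the standard library (Python 3.6+) can parse them directly with the ISO directives %G-%V-%u; note that ISO weeks are numbered differently than %W, so the results will not match the output above:
import datetime
datetime.datetime.strptime('2014-48-1', '%G-%V-%u')
# datetime.datetime(2014, 11, 24, 0, 0), the Monday of ISO week 48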
I've been searching SO and haven't figured this out yet. Hoping someone can aid this Python newb in solving my problem.
I'm trying to figure out how to write an if/then statement in python and perform an aggregation based on it. My end goal: if the date is 2017-01-07, use the value in the "fake" column; for every other date, average the two columns together.
Here is what I have so far:
import pandas as pd
import numpy as np
import datetime
np.random.seed(42)
dte = pd.date_range(start=datetime.date(2017, 1, 1), end=datetime.date(2017, 1, 15))
fake = np.random.randint(15, 100, size=15)
fake2 = np.random.randint(300, 1000, size=15)
so_df = pd.DataFrame({'date': dte,
                      'fake': fake,
                      'fake2': fake2})
so_df['avg'] = so_df[['fake', 'fake2']].mean(axis=1)
so_df.head()
Assuming you have already computed the average column:
so_df['fake'].where(so_df['date']=='20170107', so_df['avg'])
Out:
0 375.5
1 260.0
2 331.0
3 267.5
4 397.0
5 355.0
6 89.0
7 320.5
8 449.0
9 395.5
10 197.0
11 438.5
12 498.5
13 409.5
14 525.5
Name: fake, dtype: float64
If not, you can replace the column reference with the same calculation:
so_df['fake'].where(so_df['date']=='20170107', so_df[['fake','fake2']].mean(axis=1))
To check for multiple dates, you need the element-wise version of the or operator (the pipe: |); using the plain or keyword on Series raises a "truth value is ambiguous" error.
so_df['fake'].where((so_df['date']=='20170107') | (so_df['date']=='20170109'), so_df['avg'])
The above checks for two dates. In the case of 3 or more, you may want to use isin with a list:
so_df['fake'].where(so_df['date'].isin(['20170107', '20170109', '20170112']), so_df['avg'])
Out[42]:
0 375.5
1 260.0
2 331.0
3 267.5
4 397.0
5 355.0
6 89.0
7 320.5
8 38.0
9 395.5
10 197.0
11 67.0
12 498.5
13 409.5
14 525.5
Name: fake, dtype: float64
Let's use np.where:
so_df['avg'] = np.where(so_df['date'] == pd.to_datetime('2017-01-07'),
                        so_df['fake'],
                        so_df[['fake', 'fake2']].mean(1))
Output:
date fake fake2 avg
0 2017-01-01 66 685 375.5
1 2017-01-02 29 491 260.0
2 2017-01-03 86 576 331.0
3 2017-01-04 75 460 267.5
4 2017-01-05 35 759 397.0
5 2017-01-06 97 613 355.0
6 2017-01-07 89 321 89.0
7 2017-01-08 89 552 320.5
8 2017-01-09 38 860 449.0
9 2017-01-10 17 774 395.5
10 2017-01-11 36 358 197.0
11 2017-01-12 67 810 438.5
12 2017-01-13 16 981 498.5
13 2017-01-14 44 775 409.5
14 2017-01-15 52 999 525.5
One way to do if-else in pandas is np.where. It takes three arguments: the condition, the value if true, and the value if false:
so_df['avg'] = np.where(so_df['date'] == '2017-01-07', so_df['fake'], so_df[['fake', 'fake2']].mean(axis=1))
date fake fake2 avg
0 2017-01-01 66 685 375.5
1 2017-01-02 29 491 260.0
2 2017-01-03 86 576 331.0
3 2017-01-04 75 460 267.5
4 2017-01-05 35 759 397.0
5 2017-01-06 97 613 355.0
6 2017-01-07 89 321 89.0
7 2017-01-08 89 552 320.5
8 2017-01-09 38 860 449.0
9 2017-01-10 17 774 395.5
10 2017-01-11 36 358 197.0
11 2017-01-12 67 810 438.5
12 2017-01-13 16 981 498.5
13 2017-01-14 44 775 409.5
14 2017-01-15 52 999 525.5
We can also use the Series.where() method:
In [141]: so_df['avg'] = so_df['fake'] \
     ...:     .where(so_df['date'].isin(['2017-01-07','2017-01-09'])) \
     ...:     .fillna(so_df[['fake','fake2']].mean(1))
     ...:
In [142]: so_df
Out[142]:
date fake fake2 avg
0 2017-01-01 66 685 375.5
1 2017-01-02 29 491 260.0
2 2017-01-03 86 576 331.0
3 2017-01-04 75 460 267.5
4 2017-01-05 35 759 397.0
5 2017-01-06 97 613 355.0
6 2017-01-07 89 321 89.0
7 2017-01-08 89 552 320.5
8 2017-01-09 38 860 38.0
9 2017-01-10 17 774 395.5
10 2017-01-11 36 358 197.0
11 2017-01-12 67 810 438.5
12 2017-01-13 16 981 498.5
13 2017-01-14 44 775 409.5
14 2017-01-15 52 999 525.5
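The same logic can also be written with Series.mask, the mirror image of where, which replaces values where the condition is True; a sketch:
so_df['avg'] = so_df['fake'].mask(
    ~so_df['date'].isin(['2017-01-07', '2017-01-09']),
    so_df[['fake', 'fake2']].mean(1))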