How to get most recent order date? - python

I am doing an external exercise where I have a set of data of customers' purchases.
I have the following columns: customer_id, date, gender, value (purchase value). One part of the exercise is to create a new column named most_recent_order_date. How should I go about accomplishing this?
I tried
df['most_recent_order_date']=df.sort_values('customer_id',ascending=False)['date']
but this only returns the dates of all purchases in ascending order. I need it to be customer_id specific since a customer_id might have multiple purchases.
Another part of the exercise is to create an order_count column, which is the last column in the output below.
import pandas as pd

data = pd.read_csv('screening_exercise_orders_v201810.csv')
df = pd.DataFrame(data)
df['most_recent_order_date'] = 'default value'
df['order_count'] = 'default value'
df['date'] = pd.to_datetime(df['date'])
df['most_recent_order_date'] = df.sort_values('customer_id', ascending=False)['date']
df['order_count'] = df.groupby(['customer_id']).transform('count')
df.head(10)
I expect something like:
0 1000 0 2017-01-01 00:11:31 198.50 1 2017-02-10 00:11: 1
1 1001 0 2017-01-01 00:29:56 338.00 1 2017-11-01 00:29:56 1
2 1002 1 2017-01-01 01:30:31 733.00 1 2017-06-11 01:30:31 3
3 1003 1 2017-01-01 01:34:22 772.00 1 2017-05-14 01:34:22 4
4 1004 0 2017-01-01 03:11:54 508.00 1 2017-01-01 03:11:54 1
But what I actually get is:
0 1000 0 2017-01-01 00:11:31 198.50 1 2017-01-01 00:11:31 1
1 1001 0 2017-01-01 00:29:56 338.00 1 2017-01-01 00:29:56 1
2 1002 1 2017-01-01 01:30:31 733.00 1 2017-01-01 01:30:31 3
3 1003 1 2017-01-01 01:34:22 772.00 1 2017-01-01 01:34:22 4
4 1004 0 2017-01-01 03:11:54 508.00 1 2017-01-01 03:11:54 1

For the most recent date per customer, use groupby.transform with 'max':
df['date'] = pd.to_datetime(df['date'])
df['most_recent_date'] = df.groupby(['customer_id'])['date'].transform('max')
For the order count, use groupby.cumcount:
df['order_count'] = df.groupby(['customer_id']).cumcount().add(1)
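For reference, a minimal end-to-end sketch on a toy frame (the sample rows are made up; column names follow the question):
import pandas as pd

df = pd.DataFrame({
    'customer_id': [1000, 1001, 1000],
    'gender': [0, 0, 0],
    'date': ['2017-01-01 00:11:31', '2017-01-01 00:29:56', '2017-02-10 00:11:00'],
    'value': [198.50, 338.00, 12.00],
})
df['date'] = pd.to_datetime(df['date'])

# most recent order date per customer, broadcast back to every row
df['most_recent_date'] = df.groupby('customer_id')['date'].transform('max')
# running order number within each customer (1-based)
df['order_count'] = df.groupby('customer_id').cumcount().add(1)
print(df)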

Related

Combine consecutive rows of unsorted dates (one day before, one day after, or the same day) into one [duplicate]

I would like to combine rows with the same Id, consecutive dates, and the same feature values.
I have the following dataframe:
Id Start End Feature1 Feature2
0 A 2020-01-01 2020-01-15 1 1
1 A 2020-01-16 2020-01-30 1 1
2 A 2020-01-31 2020-02-15 0 1
3 A 2020-07-01 2020-07-15 0 1
4 B 2020-01-31 2020-02-15 0 0
5 B 2020-02-16 NaT 0 0
And the expected result is:
Id Start End Feature1 Feature2
0 A 2020-01-01 2020-01-30 1 1
1 A 2020-01-31 2020-02-15 0 1
2 A 2020-07-01 2020-07-15 0 1
3 B 2020-01-31 NaT 0 0
I have been trying answers from other posts, but they don't really match my use case.
Thanks in advance!
You can approach it by:
Get the day difference of each pair of consecutive entries within the same (Id, Feature1, Feature2) group by subtracting the previous End (via GroupBy.shift()) from the current Start.
Set a group number group_no such that a new group number is issued when the day difference from the previous entry within the group is greater than 1.
Then, group by Id and group_no and aggregate the Start and End dates for each group using .groupby() and .agg().
As there is NaT data within the grouping, we need to specify dropna=False during grouping. Furthermore, to get the last entry of End within the group, we use x.iloc[-1] instead of 'last'.
# convert to datetime format if not already in datetime
df['Start'] = pd.to_datetime(df['Start'])
df['End'] = pd.to_datetime(df['End'])

# sort by columns `Id` and `Start` if not already in this sequence
df = df.sort_values(['Id', 'Start'])

day_diff = (df['Start'] - df['End'].groupby([df['Id'], df['Feature1'], df['Feature2']]).shift()).dt.days
group_no = (day_diff.isna() | day_diff.gt(1)).cumsum()
df_out = (df.groupby(['Id', group_no], dropna=False, as_index=False)
            .agg({'Id': 'first',
                  'Start': 'first',
                  'End': lambda x: x.iloc[-1],
                  'Feature1': 'first',
                  'Feature2': 'first',
                  }))
Result:
print(df_out)
Id Start End Feature1 Feature2
0 A 2020-01-01 2020-01-30 1 1
1 A 2020-01-31 2020-02-15 0 1
2 A 2020-07-01 2020-07-15 0 1
3 B 2020-01-31 NaT 0 0
Extract the month from both date columns:
df['sMonth'] = df['Start'].apply(pd.to_datetime).dt.month
df['eMonth'] = df['End'].apply(pd.to_datetime).dt.month
Now group the dataframe by ['Id','Feature1','Feature2','sMonth','eMonth'] and we get the result:
df.groupby(['Id','Feature1','Feature2','sMonth','eMonth']).agg({'Start':'min','End':'max'}).reset_index().drop(['sMonth','eMonth'],axis=1)
Result
Id Feature1 Feature2 Start End
0 A 0 1 2020-01-31 2020-02-15
1 A 0 1 2020-07-01 2020-07-15
2 A 1 1 2020-01-01 2020-01-30
3 B 0 0 2020-01-31 2020-02-15

Check if date in one dataframe is between two dates in another dataframe, by group

I have the following problem. I've got a dataframe with start and end dates for each group. There might be more than one start and end date per group, like this:
group start_date end_date
1 2020-01-03 2020-03-03
1 2020-05-03 2020-06-03
2 2020-02-03 2020-06-03
And another dataframe with one row per date, per group, like this:
group date
1 2020-01-03
1 2020-02-03
1 2020-03-03
1 2020-04-03
1 2020-05-03
1 2020-06-03
2 2020-02-03
3 2020-03-03
4 2020-04-03
.
.
So I want to create a column is_between in an efficient way, ideally avoiding loops, so I get the following dataframe
group date is_between
1 2020-01-03 1
1 2020-02-03 1
1 2020-03-03 1
1 2020-04-03 0
1 2020-05-03 1
1 2020-06-03 1
2 2020-02-03 1
3 2020-03-03 1
4 2020-04-03 1
.
.
So it gets a 1 when a group's date is between the dates in the first dataframe. I'm guessing some combination of groupby, where, between and maybe map might do it, but I'm not finding the correct one. Any ideas?
Based on YOBEN_S's and Quang Hoang's advice, this did the trick:
import numpy as np

df = df.merge(dic_dates, how='left')
df['is_between'] = np.where(df.date.between(pd.to_datetime(df.start_date),
                                            pd.to_datetime(df.end_date)), 1, 0)
df = (df.sort_values(by=['group', 'date', 'is_between'])
        .drop_duplicates(subset=['group', 'date'], keep='last'))
You could try merge_asof, by group and on date/start_date, then check where the date is less than or equal to end_date, and finally assign back to the original df2:
ser = (pd.merge_asof(df2.reset_index()  # for later index alignment
                        .sort_values('date'),
                     df1.sort_values('start_date'),
                     by='group',
                     left_on='date', right_on='start_date',
                     direction='backward')
         .assign(is_between=lambda x: x.date <= x.end_date)
         .set_index(['index'])['is_between']
       )
df2['is_between'] = ser.astype(int)
print (df2)
group date is_between
0 1 2020-01-03 1
1 1 2020-02-03 1
2 1 2020-03-03 1
3 1 2020-04-03 0
4 1 2020-05-03 1
5 1 2020-06-03 1
6 2 2020-02-03 1
7 3 2020-03-03 0
8 4 2020-04-03 0
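For completeness, a brute-force alternative not taken from the answers above (a sketch): pair every date with every range of its group, test membership with between, then keep "any hit" per (group, date). Like the merge_asof output above, groups that have no ranges in the first dataframe come out as 0 here.
import pandas as pd

# assumes df1 holds group/start_date/end_date and df2 holds group/date, as above
df1[['start_date', 'end_date']] = df1[['start_date', 'end_date']].apply(pd.to_datetime)
df2['date'] = pd.to_datetime(df2['date'])

merged = df2.merge(df1, on='group', how='left')
merged['hit'] = merged['date'].between(merged['start_date'], merged['end_date'])
hits = merged.groupby(['group', 'date'])['hit'].any().astype(int).reset_index(name='is_between')
df2 = df2.merge(hits, on=['group', 'date'], how='left')
# note: this materializes one row per (date, range) pair, so it can get large when there are many ranges per group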

Pandas groupby transform to get the first non-null date value

I have a dataframe constructed like so:
import pandas as pd

df = pd.DataFrame({'id': [1,2,3,4,1,2,3,4],
                   'birthdate': ['01-01-01','02-02-02','03-03-03','04-04-04',
                                 '','02-02-02','03-04-04','04-03-04']})
df['birthdate'] = pd.to_datetime(df['birthdate'])
I want to do a groupby to change the original data using pandas .transform.
The condition is that I want to pick the birthdate value of the first non-null row per id.
I know I can use max if no other option is available to avoid the null entries, but if there are inconsistencies I don't necessarily want the maximum date, just the one that occurs first in the dataframe.
As such:
df['birthdate'] = df.groupby('id')['birthdate'].transform(max)
This is how output looks using max:
id birthdate
0 1 2001-01-01
1 2 2002-02-02
2 3 2003-03-03
3 4 2004-04-04
4 1 2001-01-01
5 2 2002-02-02
6 3 2004-03-04
7 4 2004-04-04
This is how I actually want it to look:
id birthdate
0 1 2001-01-01
1 2 2002-02-02
2 3 2003-03-03
3 4 2004-04-04
4 1 2001-01-01
5 2 2002-02-02
6 3 2003-03-03
7 4 2004-04-04
I'm pretty sure I have to create a custom lambda to put inside the .transform, but I am unsure what condition to use.
You can try the following. Your dataframe definition and suggested outputs contain different dates, so I assumed your dataframe definition was correct
df['birthdate'] = df.groupby('id').transform('first')
which outputs:
id birthdate
0 1 2001-01-01
1 2 2002-02-02
2 3 2003-03-03
3 4 2004-04-04
4 1 2001-01-01
5 2 2002-02-02
6 3 2003-03-03
7 4 2004-04-04
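This works because groupby's 'first' skips missing values, so the NaT produced by the empty birthdate string never wins. An equivalent sketch that names the column explicitly rather than transforming every non-key column:
# pick the first non-null birthdate per id and broadcast it back to every row
df['birthdate'] = df.groupby('id')['birthdate'].transform('first')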

Add a dummy indicating change between consecutive rows in grouped dataframe

This is a follow up to my previous question here.
Assume a dataset like this (which originally is read in from a .csv):
import pandas as pd

data = pd.DataFrame({'id': [1,2,3,1,2,3,1,2,3],
                     'time': ['2017-01-01 12:00:00','2017-01-01 12:00:00','2017-01-01 12:00:00',
                              '2017-01-01 12:10:00','2017-01-01 12:10:00','2017-01-01 12:10:00',
                              '2017-01-01 12:20:00','2017-01-01 12:20:00','2017-01-01 12:20:00'],
                     'values': [10,11,12,10,12,13,10,13,13]})
data = data.set_index('id')
=>
id time values
0 1 2017-01-01 12:00:00 10
1 2 2017-01-01 12:00:00 11
2 3 2017-01-01 12:00:00 12
3 1 2017-01-01 12:10:00 10
4 2 2017-01-01 12:10:00 12
5 3 2017-01-01 12:10:00 13
6 1 2017-01-01 12:20:00 10
7 2 2017-01-01 12:20:00 13
8 3 2017-01-01 12:20:00 13
Time is identical for all IDs in each observation period. The series goes on like that for many observations, i.e. every ten minutes.
Previously, I learned how to get the total number of changes in values between two consecutive periods for each id:
data.groupby(data.index).values.apply(lambda x: (x != x.shift()).sum() - 1)
This works great and is really fast. Now, I am interested in adding a new column to the df. It should be a dummy indicating for each row in values if there was a change between the current and previous row. Thus, the result would be as follows:
=>
id time values change
0 1 2017-01-01 12:00:00 10 0
1 2 2017-01-01 12:00:00 11 0
2 3 2017-01-01 12:00:00 12 0
3 1 2017-01-01 12:10:00 10 0
4 2 2017-01-01 12:10:00 12 1
5 3 2017-01-01 12:10:00 13 1
6 1 2017-01-01 12:20:00 10 0
7 2 2017-01-01 12:20:00 13 1
8 3 2017-01-01 12:20:00 13 0
After fiddling around, I came up with a solution. However, it is really slow. It won't run on my actual dataset which is rather big:
def calc_change(x):
    x = (x != x.shift())
    x.iloc[0,] = False
    return x

changes = data.groupby(data.index, as_index=False).values.apply(
    calc_change).reset_index().iloc[:, 2]
data = data.sort_index().reset_index()
data.loc[changes, 'change'] = 1
data = data.fillna(0)
I'm sure there are better ways, and I appreciate any help!
You can use this solution if your id column is not set as index.
data['change'] = data.groupby(['id'])['values'].apply(lambda x: x.diff() > 0).astype(int)
You get
id time values change
0 1 2017-01-01 12:00:00 10 0
1 2 2017-01-01 12:00:00 11 0
2 3 2017-01-01 12:00:00 12 0
3 1 2017-01-01 12:10:00 10 0
4 2 2017-01-01 12:10:00 12 1
5 3 2017-01-01 12:10:00 13 1
6 1 2017-01-01 12:20:00 10 0
7 2 2017-01-01 12:20:00 13 1
8 3 2017-01-01 12:20:00 13 0
With id as index,
data = data.sort_index()
data['change'] = data.groupby(data.index)['values'].apply(lambda x: x.diff() > 0).astype(int)
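Note that diff() > 0 only flags increases; it happens to match the sample because values never drop between periods. If a decrease should also count as a change, a sketch that compares each row with the previous one in its id group (assuming rows are already in chronological order within each id, with id as the index as above):
prev = data.groupby(data.index)['values'].shift()
data['change'] = (data['values'].ne(prev) & prev.notna()).astype(int)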

Grouping a dataframe by each 6 hours and generating a new column

I have this dataframe (type could be 1 or 2):
user_id | timestamp | type
1 | 2015-5-5 12:30 | 1
1 | 2015-5-5 14:00 | 2
1 | 2015-5-5 15:00 | 1
I want to group my data by six hours, and when doing this I want to keep type as:
1 (if there is only type 1 within that 6-hour frame)
2 (if there is only type 2 within that 6-hour frame) or
3 (if there were both type 1 and type 2 within that 6-hour frame)
Here is my code:
df = df.groupby(['user_id', pd.TimeGrouper(freq=(6,'H'))]).mean()
which produces:
user_id | timestamp | type
1 | 2015-5-5 12:00 | 4
However, I want to get 3 instead of 4. I wonder how I can replace the mean() in my groupby code to produce the desired output?
Try this:
In [54]: df.groupby(['user_id', pd.Grouper(key='timestamp', freq='6H')]) \
.agg({'type':lambda x: x.unique().sum()})
Out[54]:
type
user_id timestamp
1 2015-05-05 12:00:00 3
PS: it'll work only with the given types (1, 2), as their sum is 3.
Another data set:
In [56]: df
Out[56]:
user_id timestamp type
0 1 2015-05-05 12:30:00 1
1 1 2015-05-05 14:00:00 1
2 1 2015-05-05 15:00:00 1
3 1 2015-05-05 20:00:00 1
In [57]: df.groupby(['user_id', pd.Grouper(key='timestamp', freq='6H')]).agg({'type':lambda x: x.unique().sum()})
Out[57]:
type
user_id timestamp
1 2015-05-05 12:00:00 1
2015-05-05 18:00:00 1
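If the type codes were not guaranteed to be 1 and 2, summing the unique values would no longer encode "both present" as 3. A sketch that spells the rule out instead (assumes timestamp is already a datetime column):
out = (df.groupby(['user_id', pd.Grouper(key='timestamp', freq='6H')])['type']
         .agg(lambda x: 3 if x.nunique() > 1 else x.iloc[0]))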
