I am new to python and pandas. I am trying to assign new session IDs for around 2270 users, based on the time difference between the timestamps. If the time difference exceeds 4 hours, I want a new session ID. Otherwise, it would have to remain the same. In the end, I want a modified data frame with the new session ID column. Here is what I have so far:
Eh_2016["NewSessionID"] = 1 #Initialize 'NewSessionID' column in df with 1
Eh_2016['elapsed'] = datetime.time(0,0,0,0) #Create an empty elapsed to calculate Time diff later
users = Eh_2016['Username'].unique() #find the number of unique Usernames
for user in users: #start of the loop
idx = Eh_2016.index[Eh_2016.Username == user] #Slice the original df
temp = Eh_2016[Eh_2016.Username == user] #Create a temp placeholder for the slice
counter = 1 # Initialize counter for NewSessionIDs
for i,t in enumerate(temp['Timestamp']): #Looping for each timestamp value
if i != 0 :
temp['elapsed'].iloc[i] = (t - temp['Timestamp'].iloc[i-1]) #Calculate diff btwn timestamps
if temp['elapsed'].iloc[i] > datetime.timedelta(hours = 4): #If time diff>4
counter +=1 #Increase counter value
temp['NewSessionID'].iloc[i]=counter #Assign new counter value as NewSessionID
else:
temp['NewSessionID'].iloc[i] = counter #Retain previous sessionID
Eh_2016.loc[idx,:]= temp #Replace original df with the updated slice
Any help on how to make this faster would be greatly appreciated! Let me know if you need more details. Thanks in advance.
Edit: Sample DF
Username Timestamp NewSessionID Elapsed
126842 1095513 2016-06-30 20:58:30.477 1 00:00:00
126843 1095513 2016-07-16 07:54:47.986 2 15 days 10:56:17.509000
126844 1095513 2016-07-16 07:54:47.986 2 0 days 00:00:00
126845 1095513 2016-07-16 07:55:10.986 2 0 days 00:00:23
126846 1095513 2016-07-16 07:55:13.456 2 0 days 00:00:02.470000
... ... ... ...
146920 8641894 2016-08-11 22:26:14.051 31 0 days 04:50:23.415000
146921 8641894 2016-08-11 22:26:14.488 31 0 days 00:00:00.437000
146922 8641894 2016-08-12 20:01:02.419 32 0 days 21:34:47.931000
146923 8641894 2016-08-23 10:19:05.973 33 10 days 14:18:03.554000
146924 8641894 2016-09-25 11:30:35.540 34 33 days 01:11:29.567000
Filtering the whole dataframe once per user is O(users × rows), and it's not needed since you have to visit every row anyway.
A more efficient approach would be to instead iterate over the dataframe in one pass, and store the temporary variables (counter, location of previous row, etc) in a separate dataframe indexed by users.
Eh_2016["NewSessionID"] = 1 #Initialize 'NewSessionID' column in df with 1
Eh_2016['elapsed'] = datetime.time(0,0,0,0) #Create an empty elapsed to calculate Time diff later
# create new dataframe of unique users
users = pd.DataFrame({'Username': Eh_2016['Username'].unique()}).set_index('Username')
# one column for the previous session looked at for each user
users['Previous'] = -1
# one column for the counter variable
users['Counter'] = 0
# iterate over each row
for index, row in Eh_2016.iterrows(): #start of the loop
user = row['Username']
previous = users[user, 'Previous']
if previous >= 0: # if this is not the first row for this user
Eh_2016.loc[index, 'elapsed'] = (row['Timestamp'] - Eh_2016.loc[previous, 'Timestamp']) #Calculate diff btwn timestamps
if Eh_2016.loc[index, 'elapsed'] > datetime.timedelta(hours = 4): #If time diff>4
users[user,'Counter'] += 1 #Increase counter value
Eh_2016.loc[index, 'NewSessionID'] = users[user,'Counter'] # Assign new counter value as NewSessionID
users[user, 'Previous'] = index # remember this row as the latest row for this user
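If you can rely on each user's rows already being in chronological order (as in the sample), you can also skip the Python-level loop entirely. A minimal vectorized sketch of that idea, assuming Timestamp is a datetime64 column:
import pandas as pd

# gap to the previous row within each user (NaT for each user's first row)
gap = Eh_2016.groupby('Username')['Timestamp'].diff()

# a new session starts whenever the gap exceeds 4 hours;
# the cumulative count of those breaks per user numbers the sessions, starting at 1
new_session = gap > pd.Timedelta(hours=4)
Eh_2016['NewSessionID'] = new_session.groupby(Eh_2016['Username']).cumsum() + 1
Eh_2016['elapsed'] = gap.fillna(pd.Timedelta(0))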
I have two dataframes: one holds unique records of episodic data, the other lists of events. There are multiple events per episode. I need to loop through the episode data, find all the events that correspond to each episode and write the resulting events to a new dataframe. There are around 4,000 episodes and 20,000 events. The process is painfully slow because for each episode I am searching all 20,000 events. I am guessing there is a way to reduce the number of events searched in each loop by removing the matched ones, but I am not sure. This is my code (there is additional filtering to assist with matching):
for idx, row in episode_df.iterrows():
    total_episodes += 1
    icu_admission = datetime.strptime(row['ICU_ADM'], '%d/%m/%Y %H:%M:%S')
    tmp_df = event_df.loc[event_df['ur'] == row['HRN']]
    if len(tmp_df.index) < 1:
        empty_episodes += 1
        continue
    # Loop through temp dataframe and write all records with an admission date
    # close to icu_admission to new dataframe
    for idx_a, row_a in tmp_df.iterrows():
        admission = datetime.strptime(row_a['admission'], '%Y-%m-%d %H:%M:%S')
        difference = admission - icu_admission
        if abs(difference.total_seconds()) > 14400:
            continue
        new_df = new_df.append(row_a)
        selected_records += 1
A simplified version of the dataframes:
episode_df:
episode_no HRN name ICU_ADM
1 12345 joe date1
2 78124 ann date1
3 98374 bill date2
4 76523 lucy date3
event_df
episode_no ur admission
1 12345 date1
1 12345 date1
1 12345 date5
7 67899 date9
Not all episodes have events and only events with episodes need to be copied.
This could work:
import pandas as pd
import numpy as np
df1 = pd.DataFrame()
df1['ICU_ADM'] = [pd.to_datetime(f'2020-01-{x}') for x in range(1,10)]
df1['test_day'] = df1['ICU_ADM'].dt.day
df2 = pd.DataFrame()
df2['admission'] = [pd.to_datetime(f'2020-01-{x}') for x in range(2,10,3)]
df2['admission_day'] = df2['admission'].dt.day
df2['random_val'] = np.random.rand(len(df2))
pd.merge_asof(df1, df2, left_on=['ICU_ADM'], right_on=['admission'], tolerance=pd.Timedelta('1 day'))
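Applied to the frames in the question, a sketch could look like this (assuming ICU_ADM and admission parse with the formats used in the loop, that HRN and ur hold the same patient identifier with the same dtype, and that a ±4 hour tolerance matches the 14400-second check):
# parse the datetime columns with the formats used in the loop above
episode_df['ICU_ADM'] = pd.to_datetime(episode_df['ICU_ADM'], format='%d/%m/%Y %H:%M:%S')
event_df['admission'] = pd.to_datetime(event_df['admission'], format='%Y-%m-%d %H:%M:%S')

# nearest episode per event, matched per patient, within +/- 4 hours
matched = pd.merge_asof(
    event_df.sort_values('admission'),
    episode_df.sort_values('ICU_ADM'),
    left_on='admission', right_on='ICU_ADM',
    left_by='ur', right_by='HRN',
    tolerance=pd.Timedelta(hours=4),
    direction='nearest',
)
# keep only events that actually matched an episode
new_df = matched.dropna(subset=['ICU_ADM'])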
I have two dataframes
import numpy as np
import pandas as pd
test1 = pd.date_range(start='1/1/2018', end='1/10/2018')
test1 = pd.DataFrame(test1)
test1.rename(columns = {list(test1)[0]: 'time'}, inplace = True)
test2 = pd.date_range(start='1/5/2018', end='1/20/2018')
test2 = pd.DataFrame(test2)
test2.rename(columns = {list(test2)[0]: 'time'}, inplace = True)
Now in first dataframe I create column
test1['values'] = np.zeros(10)
I want to fill this column so that, next to each date, there is the index of the closest date from the second dataframe. I want it to look like this:
0 2018-01-01 0
1 2018-01-02 0
2 2018-01-03 0
3 2018-01-04 0
4 2018-01-05 0
5 2018-01-06 1
6 2018-01-07 2
7 2018-01-08 3
Of course my real data is not evenly spaced and has minutes and seconds, but the idea is the same. I use the following code:
def nearest(items, pivot):
    return min(items, key=lambda x: abs(x - pivot))

for k in range(10):
    a = nearest(test2['time'], test1['time'][k]) ### find nearest timestamp from second dataframe
    b = test2.index[test2['time'] == a].tolist()[0] ### identify the index of this timestamp
    test1['values'][k] = b ### assign this value to the cell
This code is very slow on large datasets, how can I make it more efficient?
P.S. timestamps in my real data are sorted and increasing just like in these artificial examples.
You could do this in one line, using numpy's argmin:
test1['values'] = test1['time'].apply(lambda t: np.argmin(np.absolute(test2['time'] - t)))
Note that applying a lambda function is essentially also a loop. Check if that satisfies your requirements performance-wise.
You might also be able to leverage the fact that your timestamps are sorted and the timedelta between each timestamp is constant (if I got that correctly). Calculate the offset in days and derive the index vector, e.g. as follows:
offset = (test1['time'] - test2['time']).iloc[0].days
if offset < 0:  # test1 time starts before test2 time, prepend zeros:
    offset = abs(offset)
    idx = np.append(np.zeros(offset), np.arange(len(test1['time']) - offset)).astype(int)
else:  # test1 time starts after or with test2 time, use arange right away:
    idx = np.arange(offset, offset + len(test1['time']))
test1['values'] = idx
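If the even-spacing assumption does not hold, pd.merge_asof with direction='nearest' is another option that still avoids a Python loop. A sketch, assuming both time columns are sorted as stated in the question:
# keep test2's original row number as a column called 'index'
nearest = pd.merge_asof(
    test1.sort_values('time'),
    test2.reset_index().sort_values('time'),
    on='time',
    direction='nearest',
)
test1['values'] = nearest['index'].to_numpy()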
I'm looking to add a field or two to my data set that represent the difference in sales from the last week to the current week and from the current week to the next week.
My dataset is about 4.5 million rows, so I'm looking for an efficient way of doing this. Currently I'm getting into a lot of iteration and for loops, and I'm quite sure I'm going about this the wrong way. I'm trying to write code that will be reusable on other datasets, and there are situations where you might have nulls or no change in sales week to week (therefore there is no record).
The dataset looks like the following:
Store Item WeekID WeeklySales
1 1567 34 100.00
2 2765 34 86.00
3 1163 34 200.00
1 1567 35 160.00
. .
. .
. .
I have each week as its own dictionary and then each store sales for that week in a dictionary within. So I can use the week as a key and then within the week I access the store's dictionary of item sales.
weekly_sales_dict = {}
for i in df['WeekID'].unique():
    store_items_dict = {}
    subset = df[df['WeekID'] == i]
    subset = subset.groupby(['Store', 'Item']).agg({'WeeklySales': 'sum'}).reset_index()
    for j in subset['Store'].unique():
        storeset = subset[subset['Store'] == j]
        store_items_dict.update({str(j): storeset})
    weekly_sales_dict.update({str(i): store_items_dict})
Then I iterate through each week in the weekly_sales_dict and compare each store/item within it to the week behind it (I planned to do the same for the next week as well). The 'lag_list' I create can be indexed by week, store, and item, so I was going to iterate through and add the values to my df as a new lag column, but I feel I am way overthinking this.
count = 0
key_list = list(df['WeekID'].unique())
lag_list = []
for k, v in weekly_sales_dict.items():
    if count != 0 and count != len(df['WeekID'].unique()) - 1:
        prev_wk = weekly_sales_dict[str(key_list[count - 1])]
        current_wk = weekly_sales_dict[str(key_list[count])]
        for i in df['Store'].unique():
            prev_df = prev_wk[str(i)]
            current_df = current_wk[str(i)]
            for j in df['Item'].unique():
                print('in j')
                if j in list(current_df['Item'].unique()) and j in list(prev_df['Item'].unique()):
                    item_lag = current_df[current_df['Item'] == int(j)]['WeeklySales'].values - prev_df[prev_df['Item'] == int(j)]['WeeklySales'].values
                    df[df['Item'] == j][df['Store'] == i][df['WeekID'] == key_list[count]]['lag'] = item_lag[0]
                    lag_list.append((str(i), str(j), item_lag[0]))
                elif j in list(current_df['Item'].unique()):
                    item_lag = current_df[current_df['Item'] == int(j)]['WeeklySales'].values
                    lag_list.append((str(i), str(j), item_lag[0]))
                else:
                    pass
        count += 1
    else:
        count += 1
Using pd.diff(), the problem was solved. I sorted all rows by week, then created a subset with a multi-index by grouping on store, item, and week. Finally, I used pd.diff() with a period of 1, which gave me the sales difference from the current week to the week prior.
df = df.sort_values(by = 'WeekID')
subset = df.groupby(['Store', 'Item', 'WeekID']).agg({'WeeklySales': 'sum'})
subset['lag'] = subset['WeeklySales'].diff(1)
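Note that .diff(1) on that grouped frame also differences across store/item boundaries (the first week of one item is compared with the last week of the previous item). A sketch that keeps the differences within each store/item pair, assuming the column names from the sample data:
df = df.sort_values(['Store', 'Item', 'WeekID'])
# change from the previous week to the current week, within each store/item
df['lag'] = df.groupby(['Store', 'Item'])['WeeklySales'].diff(1)
# change from the current week to the next week, within each store/item
df['lead'] = -df.groupby(['Store', 'Item'])['WeeklySales'].diff(-1)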
I have a pandas dataframe df that looks like the following:
df
Out[16]:
Start End Value Start Realtime End Realtime Duration
0 0 2999 1 736051 736051 59.98
1 3000 104999 0 736051 736051 5639.98
For each row, I would need to check the Start Realtime and End Realtime columns, and if they span two different days (e.g. Start Realtime[0] = 29-05-2016 22:30:00 and End Realtime[0] = 30-05-2016 01:00:00) I should split the row in two:
one from Start Realtime = 29-05-2016 22:30:00 until End Realtime = 29-05-2016 23:59:59
and
one from Start Realtime = 30-05-2016 00:00:00 until End Realtime = 30-05-2016 01:00:00
keeping the same value in the Value column and recalculating the duration (in seconds) and start and end columns (in samples)
It would be nice if I could keep the cutoff time (in this example midnight) flexible.
Just take it row by row for starters. The idea is if you have a row you need to split, then return a dataframe with two rows; otherwise return a dataframe with one. And then append it on to the new dataframe you are creating.
expanded_df = pd.DataFrame()
for i, row in df.iterrows():
    expanded_df = expanded_df.append(applyFunc(row), ignore_index=True)
For each row, create a cutoff time datetime object that is the closest to the start_time but after it. Then just see whether it falls between the start_time and end_time. Finally if it requires a split, create two new pandas series to return with the changed values.
def applyFunc(row):
    start_time = datetime.datetime.fromtimestamp(row["Start Realtime"])
    end_time = # Similar to above
    custom_hour = 11
    # custom_minute = ...
    cutoff_time = # Start with datetime.datetime(start_time.year, start_time.month, start_time.day, custom_hour, 0, 0) and see how you need to adjust with datetime.timedelta(...)
    if start_time < cutoff_time < end_time:
        before_cutoff = # Logic for before_cutoff; you will probably find row.set_value("key", value) useful
        after_cutoff = # Logic for after_cutoff series
        return pd.DataFrame([before_cutoff, after_cutoff])
    else:
        return row
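For reference, a minimal filled-in version of that skeleton, assuming Start Realtime and End Realtime hold Unix timestamps in seconds and a midnight cutoff; row.set_value is deprecated in recent pandas, so a plain copy-and-assign is used instead:
import datetime
import pandas as pd

def applyFunc(row, cutoff_hour=0, cutoff_minute=0):
    start_time = datetime.datetime.fromtimestamp(row["Start Realtime"])
    end_time = datetime.datetime.fromtimestamp(row["End Realtime"])

    # first candidate cutoff on the start day; push it forward a day if it is not after start_time
    cutoff_time = datetime.datetime(start_time.year, start_time.month, start_time.day,
                                    cutoff_hour, cutoff_minute, 0)
    if cutoff_time <= start_time:
        cutoff_time += datetime.timedelta(days=1)

    if start_time < cutoff_time < end_time:
        before_cutoff = row.copy()
        before_cutoff["End Realtime"] = cutoff_time.timestamp()
        before_cutoff["Duration"] = (cutoff_time - start_time).total_seconds()

        after_cutoff = row.copy()
        after_cutoff["Start Realtime"] = cutoff_time.timestamp()
        after_cutoff["Duration"] = (end_time - cutoff_time).total_seconds()

        # NOTE: the Start/End sample columns would also need recalculating,
        # which requires the sampling rate (not given in the question)
        return pd.DataFrame([before_cutoff, after_cutoff])
    return pd.DataFrame([row])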
Consider the below pandas DataFrame:
import pandas as pd
from pandas import Timestamp

df = pd.DataFrame({
    'day': [Timestamp('2017-03-27'),
            Timestamp('2017-03-27'),
            Timestamp('2017-04-01'),
            Timestamp('2017-04-03'),
            Timestamp('2017-04-06'),
            Timestamp('2017-04-07'),
            Timestamp('2017-04-11'),
            Timestamp('2017-05-01'),
            Timestamp('2017-05-01')],
    'act_id': ['916298883',
               '916806776',
               '923496071',
               '926539428',
               '930641527',
               '931935227',
               '937765185',
               '966163233',
               '966417205']
})
As you may see, there are 9 unique ids distributed in 7 days.
I am looking for a way to add two new columns.
The first column:
An increment number for each new day. For example, 1 for '2017-03-27' (same number for the same day), 2 for '2017-04-01', 3 for '2017-04-03', etc.
The second column:
An increment number for each new act_id per day. For example 1 for '916298883', 2 for '916806776' (which is linked to the same day '2017-03-27'), 1 for '923496071', 1 for '926539428', etc.
The final table should look like this
I have already tried to build the first column with apply and a function but it doesn't work as it should.
#Create helper function to give index number to a new column
counter = 1
def giveFlag(x):
    global counter
    index = counter
    counter += 1
    return index
And then:
# Create day flagger column
df_helper['day_no'] = df_helper['day'].apply(lambda x: giveFlag(x))
try this:
days = list(set(df['day']))
days.sort()
day_no = list()
iter_no = list()
for index, day in enumerate(days):
    counter = 1
    for dfday in df['day']:
        if dfday == day:
            iter_no.append(counter)
            day_no.append(index + 1)
            counter += 1
df['day_no'] = pd.Series(day_no).values
df['iter_no'] = pd.Series(iter_no).values
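A vectorized sketch of the same two columns, using a dense rank for the day counter and groupby().cumcount() for the within-day counter (assuming day is a datetime column as constructed in the question):
# dense rank over the day column -> 1, 1, 2, 3, ... (same number for the same day)
df['day_no'] = df['day'].rank(method='dense').astype(int)
# running counter of rows within each day -> 1, 2, 1, 1, ...
df['iter_no'] = df.groupby('day').cumcount() + 1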