I am working with panel time-series data and am struggling to create a fast for loop that sums the past 50 numbers at the current index i. The data is about 600k rows, and the loop starts to churn at around 30k. Is there a way to use pandas or NumPy to do the same in a fraction of the time?
The Change column is of type float, with 4 decimals.
Index Change
0 0.0410
1 0.0000
2 0.1201
... ...
74327 0.0000
74328 0.0231
74329 0.0109
74330 0.0462
SEQ_LEN = 50
for i in range(SEQ_LEN, len(df)):
    df.at[i, 'Change_Sum'] = sum(df['Change'][i-SEQ_LEN:i])
Any help would be highly appreciated! Thank you!
I tried this with 600k rows and the average time was
20.9 ms ± 1.35 ms
This will return a Series with the rolling sum of the last 50 Change values in the df:
df['Change'].rolling(50).sum()
you can add it to a new column like so:
df['change50'] = df['Change'].rolling(50).sum()
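Note that the loop in the question sums the 50 values before row i, excluding row i itself, while rolling(50).sum() includes the current row. If you need exactly the original alignment, a shift reproduces it (a small sketch, not from the original answer):

df['Change_Sum'] = df['Change'].rolling(50).sum().shift(1)  # sum of the previous 50 rows, current row excluded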
Disclaimer: this solution cannot compete with .rolling(). Also, for the grouped (panel) case, just do df.groupby("group")["Change"].rolling(50).sum() and then reset the index, so please accept the other answer.
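A minimal sketch of that grouped variant, assuming a hypothetical grouping column named "group":

df['change50'] = (
    df.groupby('group')['Change']
      .rolling(50).sum()
      .reset_index(level=0, drop=True)  # drop the group level so the result aligns with df's index
)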
The explicit for loop can be avoided by translating your running partial sum into a difference of cumulative sums (cumsum). The formula:
Sum[x-50:x] = Sum[:x] - Sum[:x-50] = Cumsum[x] - Cumsum[x-50]
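For instance, with SEQ_LEN = 3 and Change = [1, 2, 3, 4, 5], Cumsum = [1, 3, 6, 10, 15], and the sum of the three values ending at index 4 is Cumsum[4] - Cumsum[1] = 15 - 3 = 12 = 3 + 4 + 5.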
Code
For showcase purposes, I have shortened len(df["Change"]) to 10 and SEQ_LEN to 5. A million records completed almost immediately this way.
import pandas as pd
import numpy as np
# data
SEQ_LEN = 5
np.random.seed(111) # reproducibility
df = pd.DataFrame(
    data={
        "Change": np.random.normal(0, 1, 10)  # 10 rows for the showcase; use 1_000_000 for the full-size test
    }
)
# step 1. Do cumsum
df["Change_Cumsum"] = df["Change"].cumsum()
# Step 2. calculate diff of cumsum: Sum[x-50:x] = Sum[:x] - Sum[:x-50]
df["Change_Sum"] = np.nan # or zero as you wish
df.loc[SEQ_LEN:, "Change_Sum"] = df["Change_Cumsum"].values[SEQ_LEN:] - df["Change_Cumsum"].values[:(-SEQ_LEN)]
# add idx=SEQ_LEN-1
df.at[SEQ_LEN-1, "Change_Sum"] = df.at[SEQ_LEN-1, "Change_Cumsum"]
Output
df
Out[30]:
Change Change_Cumsum Change_Sum
0 -1.133838 -1.133838 NaN
1 0.384319 -0.749519 NaN
2 1.496554 0.747035 NaN
3 -0.355382 0.391652 NaN
4 -0.787534 -0.395881 -0.395881
5 -0.459439 -0.855320 0.278518
6 -0.059169 -0.914489 -0.164970
7 -0.354174 -1.268662 -2.015697
8 -0.735523 -2.004185 -2.395838
9 -1.183940 -3.188125 -2.792244
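The same difference of cumulative sums can also be written more compactly with shift (a sketch; unlike the code above, it fills the first SEQ_LEN - 1 rows with partial sums instead of NaN):

cs = df["Change"].cumsum()
df["Change_Sum"] = cs - cs.shift(SEQ_LEN, fill_value=0)  # Cumsum[x] - Cumsum[x-SEQ_LEN]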
I have a dataframe c with lots of different columns. Also, arr is a dataframe that corresponds to a subset of c: arr = c[c['A_D'] == 'A'].
The main idea of my code is to iterate over all rows in the c-dataframe and search for all the possible cases (in the arr dataframe) where some specific conditions should happen:
It is only necessary to iterate over rows where c['A_D'] == 'D' and c['Already_linked'] == 0
The hour in the arr dataframe must be less than the hour_aux in the c dataframe
The column Already_linked of the arr dataframe must be zero: arr.Already_linked == 0
The Terminal and the Operator need to be the same in the c and arr dataframes
Right now, the conditions are stored using both Boolean indexing and groupby get_group:
Groupby the arr dataframe in order to choose the same Operator and Terminal: g = groups.get_group((row.Operator, row.Terminal))
Choose only the arrivals where the hour is smaller than the hour in the c dataframe and where Already_linked==0: vb = g[(g.Already_linked==0) & (g.hour<row.hour_aux)]
For each of the rows in the c dataframe that verify all conditions, a vb dataframe is created. Naturally, this dataframe has a different length in each iteration. After creating the vb dataframe, my goal is to choose the index of the vb dataframe that minimises the time between vb.START and the row's x in c. The FlightID that corresponds to this index is then stored in the c dataframe in column a. Additionally, since the arrival was linked to a departure, the column Already_linked in the arr dataframe is changed from 0 to 1.
It is important to notice that the column Already_linked of the arr dataframe may change in every iteration (and arr.Already_linked == 0 is one of the conditions to create the vb dataframe). Therefore, it is not possible to parallelize this code.
I have already used c.itertuples() for efficiency; however, since c has millions of rows, this code is still too time-consuming.
Another option would be to use pd.apply on every row. Nonetheless, this is not really straightforward, since in each loop there are values that change in both c and arr (and I believe that even with pd.apply it would be extremely slow).
Is there any possible way to convert this for loop into a vectorized solution (or to decrease the running time by 10x, if possible even more)?
Initial dataframe:
START END A_D Operator FlightID Terminal TROUND_ID tot
0 2017-03-26 16:55:00 2017-10-28 16:55:00 A QR QR001 4 QR002 70
1 2017-03-26 09:30:00 2017-06-11 09:30:00 D DL DL001 3 " " 84
2 2017-03-27 09:30:00 2017-10-28 09:30:00 D DL DL001 3 " " 78
3 2017-10-08 15:15:00 2017-10-22 15:15:00 D VS VS001 3 " " 45
4 2017-03-26 06:50:00 2017-06-11 06:50:00 A DL DL401 3 " " 9
5 2017-03-27 06:50:00 2017-10-28 06:50:00 A DL DL401 3 " " 19
6 2017-03-29 06:50:00 2017-04-19 06:50:00 A DL DL401 3 " " 3
7 2017-05-03 06:50:00 2017-10-25 06:50:00 A DL DL401 3 " " 32
8 2017-06-25 06:50:00 2017-10-22 06:50:00 A DL DL401 3 " " 95
9 2017-03-26 07:45:00 2017-10-28 07:45:00 A DL DL402 3 " " 58
Desired Output (some of the columns were excluded in the dataframe below. Only the a and Already_linked columns are relevant):
START END A_D Operator a Already_linked
0 2017-03-26 16:55:00 2017-10-28 16:55:00 A QR 0 1
1 2017-03-26 09:30:00 2017-06-11 09:30:00 D DL DL402 1
2 2017-03-27 09:30:00 2017-10-28 09:30:00 D DL DL401 1
3 2017-10-08 15:15:00 2017-10-22 15:15:00 D VS No_link_found 0
4 2017-03-26 06:50:00 2017-06-11 06:50:00 A DL 0 0
5 2017-03-27 06:50:00 2017-10-28 06:50:00 A DL 0 1
6 2017-03-29 06:50:00 2017-04-19 06:50:00 A DL 0 0
7 2017-05-03 06:50:00 2017-10-25 06:50:00 A DL 0 0
8 2017-06-25 06:50:00 2017-10-22 06:50:00 A DL 0 0
9 2017-03-26 07:45:00 2017-10-28 07:45:00 A DL 0 1
Code:
groups = arr.groupby(['Operator', 'Terminal'])
for row in c[(c.A_D == "D") & (c.Already_linked == 0)].itertuples():
    try:
        g = groups.get_group((row.Operator, row.Terminal))
        vb = g[(g.Already_linked==0) & (g.hour<row.hour_aux)]
        aux = (vb.START - row.x).abs().idxmin()
        c.loc[row.Index, 'a'] = vb.loc[aux].FlightID
        arr.loc[aux, 'Already_linked'] = 1
        continue
    except:
        continue
c['Already_linked'] = np.where((c.a != 0) & (c.a != 'No_link_found') & (c.A_D == 'D'), 1, c['Already_linked'])
c.Already_linked.loc[arr.Already_linked.index] = arr.Already_linked
c['a'] = np.where((c.Already_linked == 0) & (c.A_D == 'D'),'No_link_found',c['a'])
Code for the initial c dataframe:
import numpy as np
import pandas as pd
import io
s = '''
A_D Operator FlightID Terminal TROUND_ID tot
A QR QR001 4 QR002 70
D DL DL001 3 " " 84
D DL DL001 3 " " 78
D VS VS001 3 " " 45
A DL DL401 3 " " 9
A DL DL401 3 " " 19
A DL DL401 3 " " 3
A DL DL401 3 " " 32
A DL DL401 3 " " 95
A DL DL402 3 " " 58
'''
data_aux = pd.read_table(io.StringIO(s), delim_whitespace=True)
data_aux.Terminal = data_aux.Terminal.astype(str)
data_aux.tot= data_aux.tot.astype(str)
d = {'START': ['2017-03-26 16:55:00', '2017-03-26 09:30:00', '2017-03-27 09:30:00', '2017-10-08 15:15:00',
               '2017-03-26 06:50:00', '2017-03-27 06:50:00', '2017-03-29 06:50:00', '2017-05-03 06:50:00',
               '2017-06-25 06:50:00', '2017-03-26 07:45:00'],
     'END': ['2017-10-28 16:55:00', '2017-06-11 09:30:00', '2017-10-28 09:30:00', '2017-10-22 15:15:00',
             '2017-06-11 06:50:00', '2017-10-28 06:50:00', '2017-04-19 06:50:00', '2017-10-25 06:50:00',
             '2017-10-22 06:50:00', '2017-10-28 07:45:00']}
aux_df = pd.DataFrame(data=d)
aux_df.START = pd.to_datetime(aux_df.START)
aux_df.END = pd.to_datetime(aux_df.END)
c = pd.concat([aux_df, data_aux], axis = 1)
c['A_D'] = c['A_D'].astype(str)
c['Operator'] = c['Operator'].astype(str)
c['Terminal'] = c['Terminal'].astype(str)
c['hour'] = pd.to_datetime(c['START'], format='%H:%M').dt.time
c['hour_aux'] = pd.to_datetime(c['START'] - pd.Timedelta(15, unit='m'),
                               format='%H:%M').dt.time
c['start_day'] = c['START'].astype(str).str[0:10]
c['end_day'] = c['END'].astype(str).str[0:10]
c['x'] = c.START - pd.to_timedelta(c.tot.astype(int), unit='m')
c["a"] = 0
c["Already_linked"] = np.where(c.TROUND_ID != " ", 1 ,0)
arr = c[c['A_D'] == 'A']
While this is not a vectorized solution, it should speed things up considerably if your sample data set mimics your true data set. Currently, you are wasting time looping over every row, but you only care about looping over rows where ['A_D'] == 'D' and ['Already_linked'] == 0. Instead, remove the ifs and loop over the truncated dataframe, which is only 30% of the initial dataframe.
for row in c[(c.A_D == 'D') & (c.Already_linked == 0)].itertuples():
    vb = arr[(arr.Already_linked == 0) & (arr.hour < row.hour_aux)].copy().query(row.query_string)
    try:
        aux = (vb.START - row.x).abs().idxmin()
        print(row.x)
        c.loc[row.Index, 'a'] = vb.loc[aux,'FlightID']
        arr.loc[aux, 'Already_linked'] = 1
        continue
    except:
        continue
Your question was whether there is a way to vectorize the for loop, but I think that question hides what you really want, which is an easy way to speed your code up. For performance questions, a good starting point is always profiling. However, I have a strong suspicion that the dominant operation in your code is .query(row.query_string). Running that for every row is expensive if arr is large.
For arbitrary queries, that runtime can't really be improved at all without removing dependencies between iterations and parallelizing the expensive step. You might be a bit luckier though. Your query string always checks two different columns to see if they're equal to something you care about. However, for each row, that requires going through your entire slice of arr. Since the slice changes each time, that could cause problems, but here are some ideas:
Since you're slicing arr each time anyway, maintain a view of just the arr.Already_linked == 0 rows so you're iterating over a smaller object.
Better yet, before you do any looping you should first group arr by Terminal and Operator. Then, instead of running through all of arr, first select the group you want and then do your slicing and filtering. This would require rethinking the exact implementation of query_string a little bit, but the advantage is that if you have a lot of terminals and operators you'll typically be working over a much smaller object than arr. Moreover, you wouldn't even have to query that object since that was implicitly done by the groupby.
Depending on how arr.hour typically relates to row.hour_aux, you might see improvements by sorting arr (or each group) with respect to hour at the beginning. Just using the inequality operator you probably wouldn't see any gains, but you could pair that with a logarithmic search for the cutoff point and then just slice up to that cutoff point (see the sketch after this list).
And so on. Again, I suspect any way of restructuring the query you're doing on all of arr for every row will offer substantially more gains than just switching frameworks or vectorizing bits and pieces.
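As a rough illustration of the sorting idea, here is a hypothetical helper (a sketch only, assuming each group has been sorted by hour once before the loop and that hour holds comparable values such as datetime.time):

import numpy as np

def rows_before(g_sorted, cutoff):
    # Rows of a group (pre-sorted by 'hour') with hour < cutoff, found by binary search.
    pos = np.searchsorted(g_sorted['hour'].values, cutoff)  # first position with hour >= cutoff
    return g_sorted.iloc[:pos]  # the Already_linked == 0 filter still has to be applied afterwards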
Expanding on some of those points a little bit and adapting @DJK's code, look at what happens when we make the following changes.
groups = arr.groupby(['Operator', 'Terminal'])

for row in c[(c.A_D == 'D') & (c.Already_linked == 0)].itertuples():
    g = groups.get_group((row.Operator, row.Terminal))
    vb = g[(g.Already_linked==0) & (g.hour<row.hour_aux)]
    try:
        aux = (vb.START - row.x).abs().idxmin()
        print(row.x)
        c.loc[row.Index, 'a'] = vb.loc[aux,'FlightID']
        arr.loc[aux, 'Already_linked'] = 1  # update arr itself so later iterations see the change
        continue
    except:
        continue
Part of the reason your query is so slow is because it's searching over all of arr each time. In contrast, the .groupby() executes in roughly the same time as one query, but then for every subsequent iteration you can just use .get_group() to efficiently find the tiny subset of the data you care about.
A helpful (extremely crude) rule of thumb when benchmarking is that a billion things takes a second. If you're seeing much longer times than that for something measured in millions of things, like your millions of rows, that means that for each of those rows you're doing tons of things to get up to billions of operations. That leaves a ton of potential for better algorithms to reduce the number of operations, whereas vectorization really only yields constant factor improvements (and for many string/query operations not even great improvements at that).
This solution uses pd.DataFrame.isin which uses numpy.in1d
Apparently 'isin' isn't necessarily faster for small datasets (like this sample), but is significantly faster for large datasets. You'll have to run it against your data to determine performance.
flight_record_linkage.ipynb
Expanded the dataset using, for example, c = pd.concat([c] * 10000, ignore_index=True), varying the multiplier for each test below.
Increase the dataset length by 3 orders of magnitude (10000 rows total).
Original method: Wall time: 8.98s
New method: Wall time: 16.4s
Increase the dataset length by 4 orders of magnitude (100000 rows total).
Original method: Wall time: 8min 17s
New method: Wall time: 1min 14s
Increase the dataset length by 5 orders of magnitude (1000000 rows total).
New method: Wall time: 11min 33s
New Method: Using isin and apply
def apply_do_g(it_row):
    """
    This is your function, but using isin and apply
    """
    keep = {'Operator': [it_row.Operator], 'Terminal': [it_row.Terminal]}  # dict for isin combined mask
    holder1 = arr[list(keep)].isin(keep).all(axis=1)  # create boolean mask
    holder2 = arr.Already_linked.isin([0])  # create boolean mask
    holder3 = arr.hour < it_row.hour_aux  # create boolean mask
    holder = holder1 & holder2 & holder3  # combine the masks
    holder = arr.loc[holder]
    if not holder.empty:
        aux = np.absolute(holder.START - it_row.x).idxmin()
        c.loc[it_row.name, 'a'] = holder.loc[aux].FlightID  # use with apply 'it_row.name'
        arr.loc[aux, 'Already_linked'] = 1

def new_way_2():
    keep = {'A_D': ['D'], 'Already_linked': [0]}
    df_test = c[c[list(keep)].isin(keep).all(axis=1)].copy()  # returns the resultant df
    df_test.apply(lambda row: apply_do_g(row), axis=1)  # g is multiple DataFrames

# call the function
new_way_2()
Your problem looks like one of the most common problems in database operations. I do not fully understand what you want to get, because you have not fully formulated the task. Now to the possible solution: avoid loops at all.
You have a very long table with the columns time, FlightID, Operator, Terminal, A_D. The other columns and the dates do not matter, if I understand you correctly. Also, start_time and end_time are the same in every row. By the way, you may get the time column with the code table.loc[:, 'time'] = table.loc[:, 'START'].dt.time.
Drop the duplicates: table = table.drop_duplicates(subset=['time', 'FlightID', 'Operator', 'Terminal']), and your table will become significantly shorter.
Split table into table_arr and table_dep according to A_D value: table_arr = table.loc[table.loc[:, 'A_D'] == 'A', ['FlightID', 'Operator', 'Terminal', 'time']], table_dep = table.loc[table.loc[:, 'A_D'] == 'D', ['FlightID', 'Operator', 'Terminal', 'time']]
It seems that everything you tried to get with loops can be obtained with a single line: table_result = table_arr.merge(table_dep, how='right', on=['Operator', 'Terminal'], suffixes=('_arr', '_dep')). It is basically the same operation as a JOIN in SQL.
According to my understanding of your problem, and with the tiny piece of data you have provided, you get just the desired output (a correspondence between FlightID_dep and FlightID_arr for all FlightID_dep values) without any loop, so it is much faster. table_result is:
FlightID_arr Operator Terminal time_arr FlightID_dep time_dep
0 DL401 DL 3 06:50:00 DL001 09:30:00
1 DL402 DL 3 07:45:00 DL001 09:30:00
2 NaN VS 3 NaN VS001 15:15:00
Of course, in the general case (with actual data) you will need one more step: filter table_result on the condition time_arr < time_dep, or any other condition you have (a sketch follows the complete code below). Unfortunately, the data you have provided is not enough to fully solve your problem.
Complete code is:
import io
import pandas as pd
data = '''
START,END,A_D,Operator,FlightID,Terminal,TROUND_ID,tot
2017-03-26 16:55:00,2017-10-28 16:55:00,A,QR,QR001,4,QR002,70
2017-03-26 09:30:00,2017-06-11 09:30:00,D,DL,DL001,3,,84
2017-03-27 09:30:00,2017-10-28 09:30:00,D,DL,DL001,3,,78
2017-10-08 15:15:00,2017-10-22 15:15:00,D,VS,VS001,3,,45
2017-03-26 06:50:00,2017-06-11 06:50:00,A,DL,DL401,3,,9
2017-03-27 06:50:00,2017-10-28 06:50:00,A,DL,DL401,3,,19
2017-03-29 06:50:00,2017-04-19 06:50:00,A,DL,DL401,3,,3
2017-05-03 06:50:00,2017-10-25 06:50:00,A,DL,DL401,3,,32
2017-06-25 06:50:00,2017-10-22 06:50:00,A,DL,DL401,3,,95
2017-03-26 07:45:00,2017-10-28 07:45:00,A,DL,DL402,3,,58
'''
table = pd.read_csv(io.StringIO(data), parse_dates=[0, 1])
table.loc[:, 'time'] = table.loc[:, 'START'].dt.time
table = table.drop_duplicates(subset=['time', 'FlightID', 'Operator', 'Terminal'])
table_arr = table.loc[table.loc[:, 'A_D'] == 'A', ['FlightID', 'Operator', 'Terminal', 'time']]
table_dep = table.loc[table.loc[:, 'A_D'] == 'D', ['FlightID', 'Operator', 'Terminal', 'time']]
table_result = table_arr.merge(
    table_dep,
    how='right',
    on=['Operator', 'Terminal'],
    suffixes=('_arr', '_dep'))
print(table_result)
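A sketch of the extra filtering step mentioned above (the rule for picking one arrival per departure is an assumption here, not something stated in the question):

# drop departures that found no arrival at all, then keep only arrivals before the departure
linked = table_result.dropna(subset=['time_arr'])
linked = linked[linked['time_arr'] < linked['time_dep']]
# for example, keep the latest qualifying arrival for each departure
linked = (linked.sort_values('time_arr')
                .groupby('FlightID_dep', as_index=False)
                .last())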
I'm trying to perform an action on grouped data in Pandas. For each group, I want to loop through the rows and compare them to the first row in the group. If conditions are met, then I want to print out the row details. My data looks like this:
Orig Dest Route Vol Per VolPct
ORD ICN A 2,251 0.64 0.78
ORD ICN B 366 0.97 0.13
ORD ICN C 142 0.14 0.05
DCA FRA A 9,059 0.71 0.85
DCA FRA B 1,348 0.92 0.13
DCA FRA C 281 0.8 0.03
My groups are Orig, Dest pairs. If a row in the group other than the first row has a Per greater than the first row and a VolPct greater than .1, I want to output the grouped pair and the route. In this example, the output would be:
ORD ICN B
DCA FRA B
My attempted code is as follows:
for lane in otp.groupby(otp['Orig','Dest']):
    X = lane.first(['Per'])
    for row in lane:
        if (row['Per'] > X and row['VolPct'] > .1):
            print(row['Orig','Dest','Route'])
However, this isn't working so I'm obviously not doing something right. I'm also not sure how to tell Python to ignore the first row when in the "row in lane" loop. Any ideas? Thanks!
You are pretty close as it is.
First, you are calling groupby incorrectly. You should just pass a list of the column names instead of a DataFrame object. So, instead of otp.groupby(otp['Orig','Dest']) you should use otp.groupby(['Orig','Dest']).
Once you are looping through the groups you will hit more issues. Each item you get when iterating over a groupby object is actually a tuple. The first item in that tuple is the grouping key and the second is the grouped data. For example, your first group would be the following tuple:
(('DCA', 'FRA'), Orig Dest Route Vol Per VolPct
3 DCA FRA A 9,059 0.71 0.85
4 DCA FRA B 1,348 0.92 0.13
5 DCA FRA C 281 0.80 0.03)
You will need to change the way you set X to reflect this. For example, X = lane.first(['Per']) should become X = lane[1].iloc[0].Per. After that you only have minor errors in the way you iterate through the rows and access multiple columns in a row. To wrap it all up, your loop should be something like so:
for key, lane in otp.groupby(['Orig', 'Dest']):
    X = lane.iloc[0].Per
    for idx, row in lane.iterrows():
        if (row['Per'] > X and row['VolPct'] > .1):
            print(row[['Orig', 'Dest', 'Route']])
Note that I use iterrows to iterate through the rows, and I use double brackets when accessing multiple columns in a DataFrame.
You don't really need to tell pandas to ignore the first row in each group as it should never trigger your if statement, but if you did want to skip it you could use lane[1:].iterrows().
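A loop-free alternative for the same check (a sketch, not part of the original answer) compares each row's Per to the first Per of its group with transform. The first row of each group is excluded automatically because its Per equals the group's first Per.

first_per = otp.groupby(['Orig', 'Dest'])['Per'].transform('first')
mask = (otp['Per'] > first_per) & (otp['VolPct'] > .1)
print(otp.loc[mask, ['Orig', 'Dest', 'Route']])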
I'm trying to find a way to iterate code for a linear regression over many, many columns, upwards of Z3. Here is a snippet of the dataframe called df1:
Time A1 A2 A3 B1 B2 B3
1 1.00 6.64 6.82 6.79 6.70 6.95 7.02
2 2.00 6.70 6.86 6.92 NaN NaN NaN
3 3.00 NaN NaN NaN 7.07 7.27 7.40
4 4.00 7.15 7.26 7.26 7.19 NaN NaN
5 5.00 NaN NaN NaN NaN 7.40 7.51
6 5.50 7.44 7.63 7.58 7.54 NaN NaN
7 6.00 7.62 7.86 7.71 NaN NaN NaN
This code returns the slope coefficient of a linear regression for ONE column only and appends the value to a NumPy array called series. Here is what it looks like for extracting the slope of the first column:
from sklearn.linear_model import LinearRegression
series = np.array([]) #blank list to append result
df2 = df1[~np.isnan(df1['A1'])] #removes NaN values for each column to apply sklearn function
df3 = df2[['Time','A1']]
npMatrix = np.matrix(df3)
X, Y = npMatrix[:,0], npMatrix[:,1]
slope = LinearRegression().fit(X,Y) # either this or the next line
m = slope.coef_[0]
series = np.concatenate((series, m), axis=0)
As it stands now, I am using this slice of code and replacing "A1" with each new column name all the way up to "Z3", which is extremely inefficient. I know there are many easy ways to do this with some modules, but I have the drawback of having all these intermediate NaN values in the time series, so it seems like I'm limited to this method, or something like it.
I tried using a for loop such as:
for col in df1.columns:
and replacing 'A1' with col in the code, for example, but this does not seem to be working.
Is there any way I can do this more efficiently?
Thank you!
One liner (or three)
time = df[['Time']]
pd.DataFrame(np.linalg.pinv(time.T.dot(time)).dot(time.T).dot(df.fillna(0)),
             ['Slope'], df.columns)
Broken down with a bit of explanation
Using the closed form of OLS, the slope estimate is (XᵀX)⁻¹XᵀY.
In this case X is time, where we define time as df[['Time']]. I used the double brackets to preserve the dataframe and its two dimensions. If I'd used single brackets, I'd have gotten a series and its one dimension, and then the dot products aren't as pretty.
(XᵀX)⁻¹Xᵀ is np.linalg.pinv(time.T.dot(time)).dot(time.T).
Y is df.fillna(0). Yes, we could have done one column at a time, but why, when we could do it all together? You have to deal with the NaNs. How would you imagine dealing with them? Only doing it over the times where you had data? That is equivalent to placing zeroes in the NaN spots. So that is what I did.
Finally, I use pd.DataFrame(stuff, ['Slope'], df.columns) to contain all slopes in one place with the original labels.
Note that I calculated the slope of the regression for Time against itself. Why not? It was there. Its value is 1.0. Great! I probably did it right!
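To make the double-bracket point concrete (a tiny illustration, not part of the original answer):

import pandas as pd
df = pd.DataFrame({'Time': [1.0, 2.0, 3.0], 'A1': [6.64, 6.70, 7.15]})
print(df[['Time']].shape)  # (3, 1) -- still a DataFrame, so the matrix products line up
print(df['Time'].shape)    # (3,)   -- a Series with a single dimension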
Looping is a decent strategy for a modest number (say, fewer than thousands) of columns. Without seeing your implementation, I can't say what's wrong, but here's my version, which works:
import numpy as np
from sklearn.linear_model import LinearRegression

slopes = []
for c in df1.columns:
    if c == "Time":
        continue  # skip the time column itself
    mask = ~np.isnan(df1[c])  # drop the NaN rows for this column only
    x = np.atleast_2d(df1.Time[mask].values).T
    y = np.atleast_2d(df1[c][mask].values).T
    reg = LinearRegression().fit(x, y)
    slopes.append(reg.coef_[0])
I've simplified your code a bit to avoid creating so many temporary DataFrame objects, but it should work fine your way too.
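If you want the slopes labelled by column, a small follow-up sketch (not part of the original answer, assuming the loop above was run against df1):

import pandas as pd
labelled = pd.Series(
    [s[0] for s in slopes],                          # each coef_[0] is a length-1 array
    index=[c for c in df1.columns if c != 'Time'],   # the order the loop visited them
    name='Slope',
)
print(labelled)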