I have been struggling with an optimization problem with Pandas.
I had developed a script that applies a computation to every row of a relatively small DataFrame (a few thousand rows, a few dozen columns).
I relied heavily on the apply() function, which was obviously a poor choice in most cases.
After a round of optimization, only one method remains slow, and I haven't found an easy solution for it:
Basically, my dataframe contains video viewing statistics: for each video, the number of people who watched it up to each quartile (how many watched 0%, 25%, 50%, etc.), such as:
video_name  video_length  video_0  video_25  video_50  video_75  video_100
video_1     6             1000     500       300       250       5
video_2     30            1000     500       300       250       5
I am trying to interpolate the statistics to be able to answer "how many people would have watched each quartile of the video if it lasted X seconds?"
Right now my function takes the dataframe and a "new_length" parameter, and calls apply() on each row.
The function that handles each row computes the time marks for each quartile (so 0, 7.5, 15, 22.5 and 30 for the 30s video), and the time marks for each quartile given the new length (so to reduce the 30s video to 6s, the new time marks would be 0, 1.5, 3, 4.5 and 6).
I build a dataframe containing the time marks as index, and the stats as values in the first column:
index (time marks)  view_stats
0                   1000
7.5                 500
15                  300
22.5                250
30                  5
1.5                 NaN
3                   NaN
4.5                 NaN
I then call DataFrame.interpolate(method="index") to fill the NaN values.
It works and gives me the result I expect, but it takes a whopping 11s for a 3k-row dataframe, and I believe it has to do with the use of apply() combined with the creation of a new dataframe to interpolate the data for each row.
Is there an obvious way to achieve the same result "in place", e.g. by avoiding the apply / new-dataframe approach and working directly on the original dataframe?
EDIT: The expected output when calling the function with 6 as the new length parameter would be:
video_name  video_length  video_0  video_25  video_50  video_75  video_100  new_video_0  new_video_25  new_video_50  new_video_75  new_video_100
video_1     6             1000     500       300       250       5          1000         500           300           250           5
video_2     6             1000     500       300       250       5          1000         900           800           700           600
The first row would be untouched because the video is already 6s long.
In the second row, the video would be cut from 30s to 6s, so the new quartiles would be at 0, 1.5, 3, 4.5 and 6s, and the stats would be interpolated between 1000 and 500, the values at the old 0% and 25% time marks.
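To make this step concrete, here is a minimal, self-contained sketch of the per-row interpolation described above (an illustration, not part of the original script), using the 30s example values: the known stats sit at the old time marks, NaN at the new marks, and interpolate(method="index") fills the gaps using the index (the time marks) as the x axis, reproducing the 900 / 800 / 700 above.

import numpy as np
import pandas as pd

# old marks with known stats, new marks with NaN; sort so the index is monotonic
marks = pd.DataFrame(
    {"view_stats": [1000, 500, 300, 250, 5, np.nan, np.nan, np.nan]},
    index=[0, 7.5, 15, 22.5, 30, 1.5, 3, 4.5],
).sort_index()

print(marks.interpolate(method="index"))
# the new marks 1.5, 3 and 4.5 come out as 900, 800 and 700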
EDIT2: I do not mind adding temporary columns; time is an issue, memory is not.
For reference, this is my code:

import math
import pandas
from numpy import nan as NaN

def get_value(marks, asset, mark_index) -> int:
    value = marks["count"][asset["new_length_marks"][mark_index]]
    if isinstance(value, pandas.Series):
        res = value.iloc[0]
    else:
        res = value
    return math.ceil(res)

def length_update_row(row, assets, **kwargs):
    asset_name = row["asset_name"]
    asset = assets[asset_name]
    # assets is a dict containing the list of files and the old and "new" video marks,
    # pre-calculated
    marks = pandas.DataFrame(data=[int(row["video_start"]), int(row["video_25"]), int(row["video_50"]),
                                   int(row["video_75"]), int(row["video_completed"])],
                             columns=["count"],
                             index=asset["old_length_marks"])
    marks = marks.combine_first(pandas.DataFrame(data=NaN, columns=["count"],
                                                 index=asset["new_length_marks"][1:]))
    marks = marks.interpolate(method="index")
    row["video_25"] = get_value(marks, asset, 1)
    row["video_50"] = get_value(marks, asset, 2)
    row["video_75"] = get_value(marks, asset, 3)
    row["video_completed"] = get_value(marks, asset, 4)
    return row

def length_update_stats(report: pandas.DataFrame,
                        assets: dict) -> pandas.DataFrame:
    new_report = report.apply(lambda row: length_update_row(row, assets), axis=1)
    return new_report
IIUC, you could use np.interp:
# get the old x values
xs = df['video_length'].values[:, None] * [0, 0.25, 0.50, 0.75, 1]
# the corresponding y values
ys = df.iloc[:, 2:].values
# note that 6 is the new length, repeated once per row (2 rows here)
nxs = np.repeat(np.array(6), 2)[:, None] * [0, 0.25, 0.50, 0.75, 1]
res = pd.DataFrame(data=np.array([np.interp(nxi, xi, yi) for nxi, xi, yi in zip(nxs, xs, ys)]),
                   columns="new_" + df.columns[2:])
print(res)
Output
new_video_0 new_video_25 new_video_50 new_video_75 new_video_100
0 1000.0 500.0 300.0 250.0 5.0
1 1000.0 900.0 800.0 700.0 600.0
And then concat across the second axis:
output = pd.concat((df, res), axis=1)
print(output)
Output (concat)
video_name video_length video_0 ... new_video_50 new_video_75 new_video_100
0 video_1 6 1000 ... 300.0 250.0 5.0
1 video_2 30 1000 ... 800.0 700.0 600.0
[2 rows x 12 columns]
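For what it's worth, here is a hedged generalisation of the snippet above (a sketch, not part of the original answer): the new length and the number of rows are taken from the arguments instead of being hard-coded as 6 and 2.

import numpy as np
import pandas as pd

def interpolate_quartiles(df, new_length):
    # assumes the layout from the question: video_name, video_length, then the five stats columns
    fracs = np.array([0, 0.25, 0.50, 0.75, 1])
    xs = df['video_length'].values[:, None] * fracs       # old time marks, one row per video
    ys = df.iloc[:, 2:].values                            # the five stats columns
    nxs = np.full(len(df), new_length)[:, None] * fracs   # new time marks
    data = [np.interp(nxi, xi, yi) for nxi, xi, yi in zip(nxs, xs, ys)]
    return pd.DataFrame(data, columns='new_' + df.columns[2:], index=df.index)

# usage: output = pd.concat((df, interpolate_quartiles(df, 6)), axis=1)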
I have a dataframe that has the following columns:
Acct Num, Correspondence Date, Open Date
For each opened account, I am being asked to look back at all the correspondences that happened within
30 days of the open date of that account, and then assign points to the correspondences as follows:
Forty-twenty-forty: attribute 40% (0.4 points) of the attribution to the first touch,
40% to the last touch, and divide the remaining 20% between all touches in between.
So I know the apply and groupby functions, but this is beyond my pay grade.
I have to group by account, with a condition based on comparing two columns against each other.
I have to do that to get the total number of correspondences, and I guess they have to be sorted as well, since the next step of assigning points to correspondences depends on the order in which they occurred.
I would like to do this efficiently, as I have a ton of rows. I know apply() can be fast, but I am pretty bad at using it when the row-level operation I am trying to do gets even a little complex.
I appreciate any help, as I am not good at pandas.
EDIT
as per request
Acct, ContactDate, OpenDate, Points (what I need to calculate)
123, 1/1/2018, 1/1/2021, 0 (because the correspondence is not within 30 days of open)
123, 12/10/2020, 1/1/2021, 0.4 (first touch gets 0.4)
123, 12/11/2020, 1/1/2021, 0.2 (other 'touches' get 0.2/(num of touches-2) 'points')
123, 12/12/2020, 1/1/2021, 0.4 (last touch gets 0.4)
456, 1/1/2018, 1/1/2021, 0 (again, because the correspondence is not within 30 days of open)
456, 12/10/2020, 1/1/2021, 0.4 (first touch gets 0.4)
456, 12/11/2020, 1/1/2021, 0.1 (other 'touches' get 0.2/(num of touches-2) 'points')
456, 12/11/2020, 1/1/2021, 0.1 (other 'touches' get 0.2/(num of touches-2) 'points')
456, 12/12/2020, 1/1/2021, 0.4 (last touch gets 0.4)
This returns a reduced dataframe, in that it excludes timeframes exceeding 30 days, and then merges the original df into it to get all the data in one df. This assumes your date sorting is correct; otherwise, you may have to sort upfront before applying the function below.
from datetime import timedelta

df['Points'] = 0  # add column to dataframe before analysis
# df.columns
# Index(['Acct', 'ContactDate', 'OpenDate', 'Points'], dtype='object')

def points(x):
    newx = x.loc[(x['OpenDate'] - x['ContactDate']) <= timedelta(days=30)]  # drop rows more than 30 days from the open date
    # print(newx.Acct)
    if newx.Acct.count() > 2:  # check that more than two dates exist
        newx['Points'].iloc[0] = .4   # first row
        newx['Points'].iloc[-1] = .4  # last row
        newx['Points'].iloc[1:-1] = .2 / newx['Points'].iloc[1:-1].count()  # middle rows: split .2 by the count of those rows
        return newx
    elif newx.Acct.count() == 2:  # placeholder for later
        # edge-case logic here for two occurrences
        return newx
    elif newx.Acct.count() == 1:  # placeholder for later
        # edge-case logic here for one occurrence
        return newx

# groupby Acct, then clean up the indices so the result can be merged back into the original df
dft = df.groupby('Acct', as_index=False).apply(points).reset_index().set_index('level_1').drop('level_0', axis=1)

# merge on index
df_points = df[['Acct', 'ContactDate', 'OpenDate']].merge(dft['Points'], how='left', left_index=True, right_index=True).fillna(0)
Output:
Acct ContactDate OpenDate Points
0 123 2018-01-01 2021-01-01 0.0
1 123 2020-12-10 2021-01-01 0.4
2 123 2020-12-11 2021-01-01 0.2
3 123 2020-12-12 2021-01-01 0.4
4 456 2018-01-01 2021-01-01 0.0
5 456 2020-12-10 2021-01-01 0.4
6 456 2020-12-11 2021-01-01 0.1
7 456 2020-12-11 2021-01-01 0.1
8 456 2020-12-12 2021-01-01 0.4
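As a hedged follow-up on the efficiency concern (a sketch, not part of the answer above): the 40/20/40 rule can also be expressed without apply, using group positions. The 30-day rule is read off the expected output, and the one- and two-touch edge cases are left open, as in the answer.

import pandas as pd

# the example frame from the question, with dates parsed
df = pd.DataFrame({
    'Acct': [123, 123, 123, 123, 456, 456, 456, 456, 456],
    'ContactDate': pd.to_datetime(['1/1/2018', '12/10/2020', '12/11/2020', '12/12/2020',
                                   '1/1/2018', '12/10/2020', '12/11/2020', '12/11/2020', '12/12/2020']),
    'OpenDate': pd.to_datetime(['1/1/2021'] * 9),
})

# keep only touches within 30 days before the open date, sorted by account and date
eligible = df[(df['OpenDate'] - df['ContactDate']).dt.days.between(0, 30)]
eligible = eligible.sort_values(['Acct', 'ContactDate'])

pos = eligible.groupby('Acct').cumcount()                          # position of each touch in its account
size = eligible.groupby('Acct')['ContactDate'].transform('size')   # number of eligible touches per account
middle = (pos > 0) & (pos < size - 1)

points = pd.Series(0.0, index=eligible.index)
points[pos == 0] = 0.4                       # first touch gets 40%
points[pos == size - 1] = 0.4                # last touch gets 40%
points[middle] = 0.2 / (size[middle] - 2)    # middle touches share the remaining 20%

df['Points'] = points.reindex(df.index, fill_value=0.0)   # excluded touches get 0
print(df)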
I am working with panel time-series data and am struggling to create a fast for loop to sum up the past 50 numbers at the current i. The data is about 600k rows, and it starts to churn around 30k. Is there a way to use pandas or NumPy to do the same in a fraction of the time?
The Change column is of type float, with 4 decimals.
Index Change
0 0.0410
1 0.0000
2 0.1201
... ...
74327 0.0000
74328 0.0231
74329 0.0109
74330 0.0462
SEQ_LEN = 50

for i in range(SEQ_LEN, len(df)):
    df.at[i, 'Change_Sum'] = sum(df['Change'][i-SEQ_LEN:i])
Any help would be highly appreciated! Thank you!
I tried this with 600k rows and the average time was
20.9 ms ± 1.35 ms
This returns a series with the rolling sum of the last 50 Change values in the df:
df['Change'].rolling(50).sum()
you can add it to a new column like so:
df['change50'] = df['Change'].rolling(50).sum()
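A hedged side note (an addition, not something the asker specified): rolling(50).sum() yields NaN for the first 49 rows because the window is not yet full. If partial sums are preferred there, min_periods relaxes that requirement:

df['change50'] = df['Change'].rolling(50, min_periods=1).sum()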
Disclaimer: this solution cannot compete with .rolling(). Also, for a .groupby() case, just do df.groupby("group")["Change"].rolling(50).sum() and then reset the index. So please accept the other answer.
The explicit for loop can be avoided by translating your running partial sum into a difference of cumulative sums (cumsum). The formula:
Sum[x-50:x] = Sum[:x] - Sum[:x-50] = Cumsum[x] - Cumsum[x-50]
Code
For showcase purposes, I have shortened len(df["Change"]) to 10 and SEQ_LEN to 5. A million records complete almost immediately this way.
import pandas as pd
import numpy as np
# data
SEQ_LEN = 5
np.random.seed(111) # reproducibility
df = pd.DataFrame(
    data={
        "Change": np.random.normal(0, 1, 10)  # 10 rows for the showcase (use the full data in practice)
    }
)
# step 1. Do cumsum
df["Change_Cumsum"] = df["Change"].cumsum()
# Step 2. calculate diff of cumsum: Sum[x-50:x] = Sum[:x] - Sum[:x-50]
df["Change_Sum"] = np.nan # or zero as you wish
df.loc[SEQ_LEN:, "Change_Sum"] = df["Change_Cumsum"].values[SEQ_LEN:] - df["Change_Cumsum"].values[:(-SEQ_LEN)]
# fill idx=SEQ_LEN-1: its window is exactly the first SEQ_LEN values, i.e. the cumsum itself
df.at[SEQ_LEN-1, "Change_Sum"] = df.at[SEQ_LEN-1, "Change_Cumsum"]
Output
df
Out[30]:
Change Change_Cumsum Change_Sum
0 -1.133838 -1.133838 NaN
1 0.384319 -0.749519 NaN
2 1.496554 0.747035 NaN
3 -0.355382 0.391652 NaN
4 -0.787534 -0.395881 -0.395881
5 -0.459439 -0.855320 0.278518
6 -0.059169 -0.914489 -0.164970
7 -0.354174 -1.268662 -2.015697
8 -0.735523 -2.004185 -2.395838
9 -1.183940 -3.188125 -2.792244
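As a quick sanity check (an addition, continuing the snippet above), the cumsum difference matches pandas' rolling sum wherever both are defined:

rolled = df["Change"].rolling(SEQ_LEN).sum()
assert np.allclose(df["Change_Sum"].iloc[SEQ_LEN-1:], rolled.iloc[SEQ_LEN-1:])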
I have tried to find a reference for making an extra column that is categorical, based on another column. I already tried the pandas categorical documentation, and Stack Overflow does not seem to have this, but I think it must; maybe I am using the wrong search tags?
For example:
Size Size_cat
10 0-50
50 0-50
150 50-500
450 50-500
5000 1000-9000
10000 >9000
Notice that the size category 500-1000 is missing (because no number falls into it).
The problem lies in a pandas crosstab that I create later, like this:
summary_table = pd.crosstab(index=[res_sum["Type"], res_sum["Size"]], columns=[res_sum["Found"]], margins=True)
summary_table = summary_table.div(summary_table["All"] / 100, axis=0)
After some editing of this table I get this kind of result:
Found              Exact      Near       No
Type      Size
DEL       50          80         20        0
          100         60         40        0
          500         80         20        0
          1000        60         40        0
          5000        40         60        0
          10000       20         80        0
DEL_Total        56.666667  43.333333      0
DUP       50           0          0      100
          100          0          0      100
          500          0        100        0
          1000         0        100        0
          5000         0        100        0
          10000       20         80        0
DUP_Total         3.333333  63.333333  33.333333
The problem is that (Size) now just lists the raw sizes, so this table can vary in size. If 5000-DEL is missing from the data, that row also disappears, and then DUP has 6 categories and DEL 5. Additionally, if I add more sizes, this table becomes very large. So I want to bin the sizes into categories, but always retain the same categories, even if some of them are empty.
I hope I am clear, because it is kinda hard to explain.
This is what I have tried already:
highest_size = res['Size'].max()
categories = int(math.ceil(highest_size / 100.0) * 100.0)
categories = int(categories / 10)
labels = ["{0} - {1}".format(i, i + categories) for i in range(0, highest_size, categories)]
print(highest_size)
print(categories)
print(labels)
10000
1000
['0 - 1000', '1000 - 2000', '2000 - 3000', '3000 - 4000', '4000 - 5000', '5000 - 6000', '6000 - 7000', '7000 - 8000', '8000 - 9000', '9000 - 10000']
I get numeric categories, but of course they now depend on the highest number, and the categories change based on the data. Additionally, I still need to link them to the 'Size' column in pandas. This does not work:
df['group'] = pd.cut(df.value, range(0, highest_size), right=False, labels=labels)
If possible I would like to define my own categories instead of using range with equal steps, like in the first example above (otherwise it takes way too long to get to 10000 with steps of 100, and taking steps of 1000 loses a lot of detail in the smaller regions).
See a mock-up below to help you get the logic. Basically, you bin the Score into custom groups using cut (or even lambda or map) by passing the value to the function GroupMapping. Let me know if it works.
import pandas as pd

df = pd.DataFrame({
    'Name': ['Harry', 'Sally', 'Mary', 'John', 'Francis', 'Devon', 'James', 'Holly', 'Molly', 'Nancy', 'Ben'],
    'Score': [1143, 2040, 2500, 3300, 3143, 2330, 2670, 2140, 2890, 3493, 1723]
})

def GroupMapping(dl):
    if int(dl) <= 1000:
        return '0-1000'
    elif 1000 < dl <= 2000:
        return '1000 - 2000'
    elif 2000 < dl <= 3000:
        return '2000 - 3000'
    elif 3000 < dl <= 4000:
        return '3000 - 4000'
    else:
        return 'None'

# df["Group"] = df['Score'].map(GroupMapping)
# df["Group"] = df['Score'].apply(lambda row: GroupMapping(row))
df['Group'] = pd.cut(df['Score'], [0, 1000, 2000, 3000, 4000], labels=['0-1000', '1000 - 2000', '2000 - 3000', '3000 - 4000'])
df
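As a follow-up on the asker's requirement that the categories stay fixed even when some are empty: pd.cut returns a Categorical, so empty bins can be kept at aggregation time. Below is a minimal sketch with custom, uneven bin edges matching the first example in the question (the exact edges are an assumption); the binned column can then be used in place of the raw Size in the crosstab.

import pandas as pd

df = pd.DataFrame({"Size": [10, 50, 150, 450, 5000, 10000]})

bins = [0, 50, 500, 1000, 9000, float("inf")]   # custom, uneven edges
labels = ["0-50", "50-500", "500-1000", "1000-9000", ">9000"]
df["Size_cat"] = pd.cut(df["Size"], bins=bins, labels=labels)

# value_counts on a categorical keeps empty categories, reported as zero
print(df["Size_cat"].value_counts(sort=False))
# 0-50: 2, 50-500: 2, 500-1000: 0, 1000-9000: 1, >9000: 1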
I have a program that ideally measures the temperature every second. However, in reality this does not happen. Sometimes it skips a second, or it breaks down for 400 seconds and then decides to start recording again. This leaves gaps in my 2-by-n dataframe, where ideally n = 86400 (the number of seconds in a day). I want to apply some sort of moving/rolling average to it to get a nicer plot, but if I do that to the "raw" datafiles, the number of data points decreases. This is shown here (watch the x-axis). I know the "nice data" doesn't look nice yet; I'm just playing with some values.
So, I want to implement a data cleaning method, which adds data to the dataframe. I thought about it, but don't know how to implement it. I thought of it as follows:
If the index is not equal to the time, then we need to add a number, at time = index. If this gap is only 1 value, then the average of the previous number and the next number will do for me. But if it is bigger, say 100 seconds are missing, then a linear function needs to be made, which will increase or decrease the value steadily.
So I guess a training set could be like this:
index time temp
0 0 20.10
1 1 20.20
2 2 20.20
3 4 20.10
4 100 22.30
Here, I would like to get a value for index 3, time 3 and the values missing between time = 4 and time = 100. I'm sorry about my formatting skills, I hope it is clear.
How would I go about programming this?
Use merge with a complete time column and then interpolate:
# Create your table
time = np.array([e for e in np.arange(20) if np.random.uniform() > 0.6])
temp = np.random.uniform(20, 25, size=len(time))
temps = pd.DataFrame([time, temp]).T
temps.columns = ['time', 'temperature']
>>> temps
time temperature
0 4.0 21.662352
1 10.0 20.904659
2 15.0 20.345858
3 18.0 24.787389
4 19.0 20.719487
The above is a random table generated with missing time data.
# modify it
filled = pd.Series(np.arange(temps.iloc[0,0], temps.iloc[-1, 0]+1))
filled = filled.to_frame()
filled.columns = ['time'] # Create a fully filled time column
merged = pd.merge(filled, temps, on='time', how='left') # merge it with original, time without temperature will be null
merged.temperature = merged.temperature.interpolate() # fill nulls linearly.
# Alternatively, use reindex, this does the same thing.
final = temps.set_index('time').reindex(np.arange(temps.time.min(),temps.time.max()+1)).reset_index()
final.temperature = final.temperature.interpolate()
>>> merged # or final
time temperature
0 4.0 21.662352
1 5.0 21.536070
2 6.0 21.409788
3 7.0 21.283505
4 8.0 21.157223
5 9.0 21.030941
6 10.0 20.904659
7 11.0 20.792898
8 12.0 20.681138
9 13.0 20.569378
10 14.0 20.457618
11 15.0 20.345858
12 16.0 21.826368
13 17.0 23.306879
14 18.0 24.787389
15 19.0 20.719487
First, you can convert the second values to actual datetimes like so:
df.index = pd.to_datetime(df['time'], unit='s')
After which you can use pandas' built-in time series operations to resample and fill in the missing values:
df = df.resample('s').interpolate('time')
Optionally, if you still want to do some smoothing you can use the following operation for that:
df.rolling(5, center=True, win_type='hann').mean()
This smooths with a 5-element-wide Hanning window. Note: any window-based smoothing will cost you data points at the edges.
Now your dataframe will have datetimes (including date) as index. This is required for the resample method. If you want to lose the date, you can simply use:
df.index = df.index.time
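To tie the pieces together, here is a small self-contained sketch on the question's sample values (an illustration; the column names 'time' and 'temp' are taken from the example, the rest is an assumption):

import pandas as pd

# second-resolution data with gaps, as in the question
df = pd.DataFrame({'time': [0, 1, 2, 4, 100],
                   'temp': [20.10, 20.20, 20.20, 20.10, 22.30]})

df.index = pd.to_datetime(df['time'], unit='s')             # datetime index, required by resample
filled = df['temp'].resample('s').interpolate('time')       # one row per second, gaps filled linearly
smoothed = filled.rolling(5, center=True, win_type='hann').mean()  # optional smoothing (needs SciPy)

print(filled.head(6))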
A quick rundown of my goal:
I am trying to make a DataFrame that contains arrays of cashflow payments. Rows are m number of loans and columns are n number of dates, and values are the associated payments on those dates, if any. My current approach is to generate the m x n DataFrame, then find each cashflow on each date of each loan and set the corresponding section of the DataFrame to that value.
cashflow_frame = pd.DataFrame(columns = all_dates, index = all_ids)
I currently have a for loop that does what I want, but takes much too long to execute. I've line profiled the code:
Timer unit: 1e-06 s
Total time: 38.6231 s
Line # Hits Time Per Hit % Time Line Contents
==============================================================
2 def frameMaker():
3 3642 1208212 331.7 3.1 for ids, slices in id_grouped:
4 3641 3542 1.0 0.0 data_slice = slices
5 3641 17040 4.7 0.0 original_index = data_slice.index.values
6 3641 583252 160.2 1.5 funded_amt = -data_slice.ix[original_index[0],'outstanding_princp_beg']
7 3641 2091958 574.6 5.4 issue_d = data_slice.ix[original_index[0], 'issue_d']
8 3641 346722 95.2 0.9 pmt_date_ranges = data_slice['date'].values
9 3641 101051 27.8 0.3 date_ranges = np.append(issue_d, pmt_date_ranges)
10 3641 310452 85.3 0.8 rest_cfs = data_slice['pmt_amt_received'].values
11 3641 50856 14.0 0.1 cfs = np.append(funded_amt, rest_cfs)
12
13 3641 321601 88.3 0.8 if data_slice.ix[original_index[-1], 'charged_off_recovs'] > 0:
14 412 6094 14.8 0.0 cfs[-1] = (data_slice.ix[original_index[-1], 'charged_off_recovs'] -
15 412 35943 87.2 0.1 data_slice.ix[original_index[-1], 'charged_off_fees'])
16
17 3641 33546392 9213.5 86.9 cashflow_frame.ix[ids,date_ranges] = cfs
So I can see that setting the arrays in the DataFrame is the slowest part. I've also noticed that it gets exponentially slower / takes a larger % Time as I have more loans. Why does it get slower and slower? What is a faster way to set values in a DataFrame? Is there a way to vectorize the operations?
I'm considering generating arrays of equal length for each loan and then creating the cashflow_frame from a dict (dict[loan_id] = [cashflows]), but I would like to keep my original code if there is a way to speed it up significantly.
More details:
http://imgur.com/a/ptgbZ
id_grouped is the top DataFrame grouped by 'id'. Data is read in from csv.
My code makes the lower DataFrame, which is exactly how I want it, but it takes much too long.
The first DataFrame is 8.5m rows with about 640,000 loan ids.
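Since the question already floats the dict-based alternative, here is a minimal self-contained sketch of that idea (an illustration; the column names and toy data are made up, and the funded-amount and charged-off handling from the loop are omitted): build one equal-length cashflow array per loan and create the frame in a single from_dict call instead of assigning into it row by row with .ix.

import numpy as np
import pandas as pd

all_dates = pd.date_range('2016-01-01', periods=4, freq='MS')
payments = pd.DataFrame({
    'id': ['loan_a', 'loan_a', 'loan_b'],
    'date': [all_dates[1], all_dates[2], all_dates[3]],
    'pmt_amt_received': [100.0, 100.0, 250.0],
})

date_pos = {d: i for i, d in enumerate(all_dates)}   # column position of every date

rows = {}
for loan_id, data_slice in payments.groupby('id'):
    row = np.zeros(len(all_dates))
    row[[date_pos[d] for d in data_slice['date']]] = data_slice['pmt_amt_received'].values
    rows[loan_id] = row

cashflow_frame = pd.DataFrame.from_dict(rows, orient='index', columns=all_dates)
print(cashflow_frame)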