I have this dataframe:
sales
0 22.000000
1 25.000000
2 22.000000
3 18.000000
4 23.000000
and I want to 'forecast' the next value as a simple number so I can easily pass it on to another piece of code. I thought numpy.polyfit could do that, but I'm not sure how. I am just after something simple.
If numpy.polyfit can fit a curve, that curve can be continued to the next x value. This would give a forecast like in Excel, which is good enough for me.
Unfortunately, numpy.org doesn't explain how to do forecasting, hence asking here :)
Forecasting can be done by using np.poly1d(z) as described in the documentation.
import numpy as np
import pandas as pd

n = 10   # number of values you would like to forecast
deg = 1  # degree of the polynomial
df = pd.DataFrame([22, 25, 22, 18, 23], columns=['Sales'])
# Fit a polynomial to the index/value pairs and turn it into a callable.
z = np.polyfit(df.index, df.Sales, deg)
p = np.poly1d(z)
# Evaluate the polynomial at the next n index positions and append the results.
for row in range(len(df), len(df) + n):
    df.loc[row, 'Sales'] = p(row)
You can change n and deg as you like, but remember that you only have 5 observations, so deg must be 4 or lower.
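If all you need is the single next value as a plain number, a minimal sketch (using the same fitted p, evaluated before the loop above extends df, so len(df) is still the number of original observations) would be:
next_value = float(p(len(df)))  # one step past the last observed index
print(next_value)               # a plain float you can pass on to other code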
Related
I have a dataset storing marathon segment splits (5K, 10K, ...) in seconds and identifiers (age, gender, country) as columns and individuals as rows. Each cell for a marathon segment split column may contain either a float (specifying the number of seconds required to reach the segment) or "NaN". A row may contain up to 4 NaN values. Here is some sample data:
Age M/F Country 5K 10K 15K 20K Half Official Time
2323 38 M CHI 1300.0 2568.0 3834.0 5107.0 5383.0 10727.0
2324 23 M USA 1286.0 2503.0 3729.0 4937.0 5194.0 10727.0
2325 36 M USA 1268.0 2519.0 3775.0 5036.0 5310.0 10727.0
2326 37 M POL 1234.0 2484.0 3723.0 4972.0 5244.0 10727.0
2327 32 M DEN NaN 2520.0 3782.0 5046.0 5319.0 10728.0
I intend to calculate a best fit line for marathon split times (using only the columns from "5K" to "Half") for each row with at least one NaN; from the best fit line for the row, I want to impute a value to replace the NaN.
From the sample data, I intend to calculate a best fit line for row 2327 only (using values 2520.0, 3782.0, 5046.0, and 5319.0). Using this best fit line for row 2327, I intend to replace the NaN 5K time with the predicted 5K time.
How can I calculate this best fit line for each row with NaN?
Thanks in advance.
I "extrapolated" a solution from here from 2015 https://stackoverflow.com/a/31340344/6366770 (pun intended). Extrapolation definition I am not sure if in 2021 pandas has reliable extrapolation methods, so you might have to use scipy or other libraries.
When doing the Extrapolation , I excluded the "Half" column. That's because the running distances of 5K, 10K, 15K and 20K are 100% linear. It is literally a straight line if you exclude the half marathon column. But, that doesn't mean that expected running times are linear. Obviously, as you run a longer distance your average time per kilometer is lower. But, this "gets the job done" without getting too involved in an incredibly complex calculation.
Also, this is worth noting. Let's say that the first column was 1K instead of 5K. Then, this method would fail. It only works because the distances are linear. If it was 1K, you would also have to use the data from the rows of the other runners, unless you were making calculations based off the kilometers in the column names themselves. Either way, this is an imperfect solution, but much better than pd.interpolation. I linked another potential solution in the comments of tdy's answer.
import pandas as pd
import scipy.interpolate  # interp1d lives in the scipy.interpolate submodule

# We focus on the four numeric columns from 5K to 20K and transpose the dataframe,
# since we are working horizontally across the columns of each row.
# The x-values must be numeric, so we reset the transposed index to 0..3;
# the column order is preserved so the result can be written back into df later.
df_extrap = df.iloc[:, 4:8].T.reset_index(drop=True)

# Build a scipy interpolation object for one runner (one column of the transposed frame),
# ignoring the NaN cells.
def scipy_interpolate_func(s):
    s_no_nan = s.dropna()
    return scipy.interpolate.interp1d(s_no_nan.index.values, s_no_nan.values,
                                      kind='linear', bounds_error=False)

# Extrapolate linearly using the first and last known points of the fitted object.
def my_extrapolate_func(interp_func, new_x):
    x1, x2 = interp_func.x[0], interp_func.x[-1]
    y1, y2 = interp_func.y[0], interp_func.y[-1]
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (new_x - x1)

# Extrapolate every runner, concatenate the columns and transpose back to the
# original shape so the block can be assigned back to the original dataframe.
s_extrapolated = pd.concat([pd.Series(my_extrapolate_func(scipy_interpolate_func(df_extrap[s]),
                                                          df_extrap[s].index.values),
                                      index=df_extrap[s].index) for s in df_extrap.columns], axis=1).T

cols = ['5K', '10K', '15K', '20K']
df[cols] = s_extrapolated
df
Out[1]:
index Age M/F Country 5K 10K 15K 20K Half \
0 2323 38 M CHI 1300.0 2569.0 3838.0 5107.0 5383.0
1 2324 23 M USA 1286.0 2503.0 3720.0 4937.0 5194.0
2 2325 36 M USA 1268.0 2524.0 3780.0 5036.0 5310.0
3 2326 37 M POL 1234.0 2480.0 3726.0 4972.0 5244.0
4 2327 32 M DEN 1257.0 2520.0 3783.0 5046.0 5319.0
Official Time
0 10727.0
1 10727.0
2 10727.0
3 10727.0
4 10728.0
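A more direct alternative I'd consider (a sketch, not the code above): fit a degree-1 polynomial per row against the actual distances in kilometres, then fill in only the missing cells. The column list, the distance values and the choice of a straight-line fit are assumptions here; existing values are left untouched.
import numpy as np
import pandas as pd

split_cols = ['5K', '10K', '15K', '20K', 'Half']
dists = np.array([5.0, 10.0, 15.0, 20.0, 21.0975])  # x-values in km for each split column

def fill_row(row):
    y = row[split_cols].astype(float)
    known = y.notna()
    # fit only if something is missing and at least two points are known
    if known.sum() >= 2 and known.sum() < len(split_cols):
        coeffs = np.polyfit(dists[known], y[known], deg=1)   # straight line through the known splits
        y[~known] = np.polyval(coeffs, dists[~known])        # predict only the missing splits
    row[split_cols] = y
    return row

df_filled = df.apply(fill_row, axis=1)
Because the real distances are used as x-values here, the "Half" column can stay in the fit.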
I have a large dataset of test results with columns representing the date a test was completed and the number of hours it took to complete, i.e.
df = pd.DataFrame({'Completed':['21/03/2020','22/03/2020','21/03/2020','24/03/2020','24/03/2020',], 'Hours_taken':[23,32,8,73,41]})
I have a month's worth of test data and the tests can take anywhere from a couple of hours to a couple of days. I want to work out, for each day, what percentage of tests fall within the ranges of 24hrs/48hrs/72hrs etc. to complete, up to the percentage of tests that took longer than a week.
I've been able to work it out generally without taking the dates into account like so:
Lab_tests['one-day'] = Lab_tests['hours'].between(0,24)
Lab_tests['two-day'] = Lab_tests['hours'].between(24,48)
Lab_tests['GreaterThanWeek'] = Lab_tests['hours'] >168
one = Lab_tests['one-day'].value_counts().loc[True]
two = Lab_tests['two-day'].value_counts().loc[True]
eight = Lab_tests['GreaterThanWeek'].value_counts().loc[True]
print(one/10407 * 100)
print(two/10407 * 100)
print(eight/10407 * 100)
Ideally I'd like to represent the percentages in another dataset where the rows represent the dates and the columns represent the time ranges. But I can't work out how to take what I've done and modify it to get these percentages for each date. Is this possible to do in pandas?
This question, Counting qualitative values based on the date range in Pandas, is quite similar, but the fact that I'm counting occurrences in specified ranges is throwing me off and I haven't been able to get a solution out of it.
Bonus Question
I'm sure you've noticed my current code is not the most elegant thing in the world. Is there a cleaner way to do what I've done above? At the moment I'm repeating it for every range that I want.
Edit:
So the output for the sample data given would look like this:
df = pd.DataFrame({'1-day':[100,0,0,0], '2-day':[0,100,0,50],'3-day':[0,0,0,0],'4-day':[0,0,0,50]},index=['21/03/2020','22/03/2020','23/03/2020','24/03/2020'])
You're almost there. You just need to do a few final steps:
First, cast your bools to ints, so that you can sum them.
Lab_tests['one-day'] = Lab_tests['hours'].between(0,24).astype(int)
Lab_tests['two-day'] = Lab_tests['hours'].between(24,48).astype(int)
Lab_tests['GreaterThanWeek'] = (Lab_tests['hours'] > 168).astype(int)
Completed hours one-day two-day GreaterThanWeek
0 21/03/2020 23 1 0 0
1 22/03/2020 32 0 1 0
2 21/03/2020 8 1 0 0
3 24/03/2020 73 0 0 0
4 24/03/2020 41 0 1 0
Then, drop the hours column and roll the rest up to the level of Completed:
Lab_tests['one-day'] = Lab_tests['hours'].between(0,24).astype(int)
Lab_tests['two-day'] = Lab_tests['hours'].between(24,48).astype(int)
Lab_tests['GreaterThanWeek'] = (Lab_tests['hours'] > 168).astype(int)
Lab_tests.drop('hours', axis=1).groupby('Completed').sum()
one-day two-day GreaterThanWeek
Completed
21/03/2020 2 0 0
22/03/2020 0 1 0
24/03/2020 0 1 0
EDIT: To get to percentages, you just need to divide each day's counts by that day's total across the three columns. You can sum across columns by setting the axis of the sum:
...
daily_totals = Lab_tests.drop('hours', axis=1).groupby('Completed').sum()
daily_totals.sum(axis=1)
Completed
21/03/2020 2
22/03/2020 1
24/03/2020 1
dtype: int64
Then divide the daily totals dataframe by those row totals (again, we use axis to define whether each value of the series will be the divisor for a row or for a column):
daily_totals.div(daily_totals.sum(axis=1), axis=0)
one-day two-day GreaterThanWeek
Completed
21/03/2020 1.0 0.0 0.0
22/03/2020 0.0 1.0 0.0
24/03/2020 0.0 1.0 0.0
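For the bonus question, a more compact way to build all of the buckets at once is to cut the hours into labelled bins and cross-tabulate them against the completion date. This is only a sketch, not the code above; the bin edges, labels and the use of the sample df column names are assumptions:
import numpy as np
import pandas as pd

df = pd.DataFrame({'Completed': ['21/03/2020', '22/03/2020', '21/03/2020', '24/03/2020', '24/03/2020'],
                   'Hours_taken': [23, 32, 8, 73, 41]})

# Bucket the hours into day-wide bins, with everything over a week in the last bin.
bins = [0, 24, 48, 72, 96, 120, 144, 168, np.inf]
labels = ['1-day', '2-day', '3-day', '4-day', '5-day', '6-day', '7-day', 'GreaterThanWeek']
buckets = pd.cut(df['Hours_taken'], bins=bins, labels=labels)

# Rows are dates, columns are buckets, values are the share of that day's tests (in %).
result = pd.crosstab(df['Completed'], buckets, normalize='index') * 100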
I have a Pandas Series that contains the price evolution of a product (my country has high inflation), or, say, the number of coronavirus-infected people in a certain country. The values in both of these datasets grow exponentially; that means that if you had something like [3, NaN, 27] you'd want to interpolate so that the missing value is filled with 9 in this case. I checked the interpolation method in the Pandas documentation but, unless I missed something, I didn't find anything about this type of interpolation.
I can do it manually: you just take the geometric mean, or in the case of more values, get the average growth rate by doing (final value / initial value)^(1 / distance between them) and then multiply accordingly. But there are a lot of values to fill in in my Series, so how do I do this automatically? I guess I'm missing something, since this seems to be something very basic.
Thank you.
You could take the logarithm of your series, interpolate linearly and then transform it back to your exponential scale.
import pandas as pd
import numpy as np
arr = np.exp(np.arange(1,10))
arr = pd.Series(arr)
arr[3] = None
0 2.718282
1 7.389056
2 20.085537
3 NaN
4 148.413159
5 403.428793
6 1096.633158
7 2980.957987
8 8103.083928
dtype: float64
arr = np.log(arr) # Transform according to assumed process.
arr = arr.interpolate('linear') # Interpolate.
np.exp(arr) # Invert previous transformation.
0 2.718282
1 7.389056
2 20.085537
3 54.598150
4 148.413159
5 403.428793
6 1096.633158
7 2980.957987
8 8103.083928
dtype: float64
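Applied to an arbitrary Series s with NaN gaps, the same idea collapses to a one-liner (a sketch; it assumes every known value is strictly positive, since the log is taken):
s_filled = np.exp(np.log(s).interpolate('linear'))
For [3, NaN, 27] this fills in 9, i.e. the geometric mean of the two neighbours.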
I've made a script (shown below) that helps determine local maxima points using historical stock data. It uses the daily highs to mark out local resistance levels. It works great, but for any given point in time (or row in the stock data), I want to know what the most recent resistance level was just prior to that point, and I want this in its own column in the dataset. So for instance:
The top grey line is the highs for each day, and the bottom grey line was the close of each day. So roughly speaking, the dataset for that section would look like this:
High Close
216.8099976 216.3399963
215.1499939 213.2299957
214.6999969 213.1499939
215.7299957 215.2799988 <- First blue dot at high
213.6900024 213.3699951
214.8800049 213.4100037 <- 2nd blue dot at high
214.5899963 213.4199982
216.0299988 215.8200073
217.5299988 217.1799927 <- 3rd blue dot at high
216.8800049 215.9900055
215.2299957 214.2400055
215.6799927 215.5700073
....
Right now, this script looks at the entire dataset at once to determine the local maxima indexes for the highs, and then for any given point in the stock history (i.e. any given row), it looks for the NEXT maximum in the list of all maxima found. That would be a way to determine where the next resistance level is, but I don't want that because of the look-ahead bias. I just want a column of the most recent past resistance level, or maybe even the latest 2 recent points in 2 columns. That would be ideal, actually.
So my final output would look like this for the 1 column:
High Close Most_Rec_Max
216.8099976 216.3399963 0
215.1499939 213.2299957 0
214.6999969 213.1499939 0
215.7299957 215.2799988 0
213.6900024 213.3699951 215.7299957
214.8800049 213.4100037 215.7299957
214.5899963 213.4199982 214.8800049
216.0299988 215.8200073 214.8800049
217.5299988 217.1799927 214.8800049
216.8800049 215.9900055 217.5299988
215.2299957 214.2400055 217.5299988
215.6799927 215.5700073 217.5299988
....
You'll notice that a peak only shows up in the most-recent column after it has already been discovered.
Here is the code I am using:
import numpy as np
import matplotlib.pyplot as plt

real_close_prices = df['Close'].to_numpy()
highs = df['High'].to_numpy()

# local maxima: where the sign of the first difference flips from + to -
# (+1 because diff reduces the original index number by one)
max_indexes = (np.diff(np.sign(np.diff(highs))) < 0).nonzero()[0] + 1
max_values_at_indexes = highs[max_indexes]

# for each bar, replace its high with the smallest local maximum that exceeds it
curr_high = [c for c in highs]
max_values_at_indexes.sort()
for m in max_values_at_indexes:
    for i, c in enumerate(highs):
        if m > c and curr_high[i] == c:
            curr_high[i] = m
df['High_Resistance'] = curr_high

# plot
x = df.index  # x-axis positions (dates in the full script)
plt.figure(figsize=(12, 5))
plt.plot(x, highs, color='grey')
plt.plot(x, real_close_prices, color='grey')
plt.plot(x[max_indexes], highs[max_indexes], "o", label="max", color='b')
plt.show()
Hoping someone will be able to help me out with this. Thanks!
Here is one approach. Once you know where the peaks are, you can store peak indices in p_ids and peak values in p_vals. To assign the k'th most recent peak, note that p_vals[:-k] will occur at p_ids[k:]. The rest is forward filling.
# find all local maxima in the series by comparing to shifted values
peaks = (df.High > df.High.shift(1)) & (df.High > df.High.shift(-1))
# pass peak value if peak is achieved and NaN otherwise
# forward fill with previous peak value & handle leading NaNs with fillna
df['Most_Rec_Max'] = (df.High * peaks.replace(False, np.nan)).ffill().fillna(0)
# for finding n-most recent peak
p_ids, = np.where(peaks)
p_vals = df.High[p_ids].values
for n in [1,2]:
col_name = f'{n+1}_Most_Rec_Max'
df[col_name] = np.nan
df.loc[p_ids[n:], col_name] = p_vals[:-n]
df[col_name].ffill(inplace=True)
df[col_name].fillna(0, inplace=True)
# High Close Most_Rec_Max 2_Most_Rec_Max 3_Most_Rec_Max
# 0 216.809998 216.339996 0.000000 0.000000 0.000000
# 1 215.149994 213.229996 0.000000 0.000000 0.000000
# 2 214.699997 213.149994 0.000000 0.000000 0.000000
# 3 215.729996 215.279999 215.729996 0.000000 0.000000
# 4 213.690002 213.369995 215.729996 0.000000 0.000000
# 5 214.880005 213.410004 214.880005 215.729996 0.000000
# 6 214.589996 213.419998 214.880005 215.729996 0.000000
# 7 216.029999 215.820007 214.880005 215.729996 0.000000
# 8 217.529999 217.179993 217.529999 214.880005 215.729996
# 9 216.880005 215.990006 217.529999 214.880005 215.729996
# 10 215.229996 214.240006 217.529999 214.880005 215.729996
# 11 215.679993 215.570007 217.529999 214.880005 215.729996
I just came across this function that might help you a lot: scipy.signal.find_peaks.
Based on your sample dataframe, we can do the following:
from scipy.signal import find_peaks
import pandas as pd

## Grab the minimum high value as a threshold.
min_high = df["High"].min()

### Run the High values through the function. The docs explain more,
### but we can set our height to the minimum high value.
### We just need one out of two return values.
peaks, _ = find_peaks(df["High"], height=min_high)

### Build a one-column frame indexed by the peak positions so it can be merged back in.
peaks_df = pd.DataFrame({"local_high": df["High"].iloc[peaks]})

### Do some maintenance and add a column to mark peaks
# Merge on our index values
df1 = df.merge(peaks_df, how="left", left_index=True, right_index=True)
# Set non-null values to 1 and null values to 0; convert the column to integer type.
df1.loc[~df1["local_high"].isna(), "local_high"] = 1
df1.loc[df1["local_high"].isna(), "local_high"] = 0
df1["local_high"] = df1["local_high"].astype(int)
Then, your dataframe should look like the following:
High Close local_high
0 216.809998 216.339996 0
1 215.149994 213.229996 0
2 214.699997 213.149994 0
3 215.729996 215.279999 1
4 213.690002 213.369995 0
5 214.880005 213.410004 1
6 214.589996 213.419998 0
7 216.029999 215.820007 0
8 217.529999 217.179993 1
9 216.880005 215.990005 0
10 215.229996 214.240005 0
11 215.679993 215.570007 0
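To connect this back to the column you originally asked for, one possible follow-up (a sketch, not part of the answer above) is to keep the High only at the rows flagged as peaks, shift it down one row so a peak only becomes "known" after it has occurred, and forward fill:
# Peak highs become visible one row after the peak, then carry forward; 0 before the first peak.
df1["Most_Rec_Max"] = (
    df1["High"].where(df1["local_high"].eq(1))  # keep High only at peak rows
               .shift(1)                        # avoid look-ahead bias
               .ffill()
               .fillna(0)
)
This reproduces the Most_Rec_Max column from the question's desired output.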
I have a program that ideally measures the temperature every second. However, in reality this does not happen. Sometimes it skips a second, or it breaks down for 400 seconds and then decides to start recording again. This leaves gaps in my 2-column dataframe, where ideally the number of rows would be n = 86400 (the number of seconds in a day). I want to apply some sort of moving/rolling average to it to get a nicer plot, but if I do that to the "raw" datafiles, the number of data points becomes smaller. This is shown here, watch the x-axis. I know the "nice data" doesn't look nice yet; I'm just playing with some values.
So, I want to implement a data cleaning method, which adds data to the dataframe. I thought about it, but don't know how to implement it. I thought of it as follows:
If the index is not equal to the time, then we need to add a number, at time = index. If this gap is only 1 value, then the average of the previous number and the next number will do for me. But if it is bigger, say 100 seconds are missing, then a linear function needs to be made, which will increase or decrease the value steadily.
So I guess a training set could be like this:
index time temp
0 0 20.10
1 1 20.20
2 2 20.20
3 4 20.10
4 100 22.30
Here, I would like to get a value for index 3, time 3 and the values missing between time = 4 and time = 100. I'm sorry about my formatting skills, I hope it is clear.
How would I go about programming this?
Use merge with a complete time column and then interpolate:
import numpy as np
import pandas as pd

# Create an example table with randomly missing time steps
time = np.array([e for e in np.arange(20) if np.random.uniform() > 0.6])
temp = np.random.uniform(20, 25, size=len(time))
temps = pd.DataFrame([time, temp]).T
temps.columns = ['time', 'temperature']
>>> temps
time temperature
0 4.0 21.662352
1 10.0 20.904659
2 15.0 20.345858
3 18.0 24.787389
4 19.0 20.719487
The above is a random table generated with missing time data.
# modify it
filled = pd.Series(np.arange(temps.iloc[0,0], temps.iloc[-1, 0]+1))
filled = filled.to_frame()
filled.columns = ['time'] # Create a fully filled time column
merged = pd.merge(filled, temps, on='time', how='left') # merge it with original, time without temperature will be null
merged.temperature = merged.temperature.interpolate() # fill nulls linearly.
# Alternatively, use reindex, this does the same thing.
final = temps.set_index('time').reindex(np.arange(temps.time.min(),temps.time.max()+1)).reset_index()
final.temperature = final.temperature.interpolate()
>>> merged # or final
time temperature
0 4.0 21.662352
1 5.0 21.536070
2 6.0 21.409788
3 7.0 21.283505
4 8.0 21.157223
5 9.0 21.030941
6 10.0 20.904659
7 11.0 20.792898
8 12.0 20.681138
9 13.0 20.569378
10 14.0 20.457618
11 15.0 20.345858
12 16.0 21.826368
13 17.0 23.306879
14 18.0 24.787389
15 19.0 20.719487
First you can convert the second values to actual timestamps, like so:
df.index = pd.to_datetime(df['time'], unit='s')
After which you can use pandas' built-in time series operations to resample and fill in the missing values:
df = df.resample('s').interpolate('time')
Optionally, if you still want to do some smoothing you can use the following operation for that:
df.rolling(5, center=True, win_type='hann').mean()
Which will smooth with a 5 element wide Hanning window. Note: any window-based smoothing will cost you value points at the edges.
Now your dataframe will have datetimes (including date) as index. This is required for the resample method. If you want to lose the date, you can simply use:
df.index = df.index.time
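Putting those steps together on a small example (a sketch; it assumes a frame with 'time' in seconds and 'temp' columns, like the training set in the question):
import pandas as pd

df = pd.DataFrame({'time': [0, 1, 2, 4, 100],
                   'temp': [20.10, 20.20, 20.20, 20.10, 22.30]})

df.index = pd.to_datetime(df['time'], unit='s')  # seconds -> timestamps
df = df.resample('s').interpolate('time')        # fill every missing second linearly
smoothed = df['temp'].rolling(5, center=True, win_type='hann').mean()  # optional smoothing (needs scipy)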