I have a dataset with marathon segment splits (5K, 10K, ...) in seconds and identifiers (age, gender, country) as columns, and individuals as rows. Each cell in a segment split column contains either a float (the number of seconds needed to reach that segment) or NaN. A row may contain up to 4 NaN values. Here is some sample data:
Age M/F Country 5K 10K 15K 20K Half Official Time
2323 38 M CHI 1300.0 2568.0 3834.0 5107.0 5383.0 10727.0
2324 23 M USA 1286.0 2503.0 3729.0 4937.0 5194.0 10727.0
2325 36 M USA 1268.0 2519.0 3775.0 5036.0 5310.0 10727.0
2326 37 M POL 1234.0 2484.0 3723.0 4972.0 5244.0 10727.0
2327 32 M DEN NaN 2520.0 3782.0 5046.0 5319.0 10728.0
I intend to calculate a best fit line for the split times (using only the columns from "5K" to "Half") for each row with at least one NaN; from the row's best fit line, I want to impute a value to replace each NaN.
From the sample data, I intend to calculate a best fit line for row 2327 only (using values 2520.0, 3782.0, 5046.0, and 5319.0). Using this best fit line for row 2327, I intend to replace the NaN 5K time with the predicted 5K time.
How can I calculate this best fit line for each row with NaN?
Thanks in advance.
I "extrapolated" a solution from here from 2015 https://stackoverflow.com/a/31340344/6366770 (pun intended). Extrapolation definition I am not sure if in 2021 pandas has reliable extrapolation methods, so you might have to use scipy or other libraries.
When doing the Extrapolation , I excluded the "Half" column. That's because the running distances of 5K, 10K, 15K and 20K are 100% linear. It is literally a straight line if you exclude the half marathon column. But, that doesn't mean that expected running times are linear. Obviously, as you run a longer distance your average time per kilometer is lower. But, this "gets the job done" without getting too involved in an incredibly complex calculation.
Also, this is worth noting. Let's say that the first column was 1K instead of 5K. Then, this method would fail. It only works because the distances are linear. If it was 1K, you would also have to use the data from the rows of the other runners, unless you were making calculations based off the kilometers in the column names themselves. Either way, this is an imperfect solution, but much better than pd.interpolation. I linked another potential solution in the comments of tdy's answer.
import scipy as sp
import scipy.interpolate  # "import scipy" alone does not load the interpolate submodule
import pandas as pd

# Focus on the four numeric columns from 5K to 20K and transpose the dataframe,
# since we are working horizontally across the columns of each row.
# The index must be numeric for the fit, so we drop it; the original shape
# (and therefore the original index) is restored at the end.
df_extrap = df.iloc[:, 4:8].T.reset_index(drop=True)

# Build a scipy interpolation object for one runner (one column of the
# transposed frame), ignoring NaNs; it is consumed by the custom
# extrapolation function below.
def scipy_interpolate_func(s):
    s_no_nan = s.dropna()
    return sp.interpolate.interp1d(s_no_nan.index.values, s_no_nan.values,
                                   kind='linear', bounds_error=False)

# Fit a straight line through the first and last non-NaN points and evaluate
# it at the requested positions (this extrapolates beyond the known range too).
def my_extrapolate_func(interp_func, new_x):
    x1, x2 = interp_func.x[0], interp_func.x[-1]
    y1, y2 = interp_func.y[0], interp_func.y[-1]
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (new_x - x1)

# Extrapolate each runner, concatenate the results, and transpose back to the
# initial shape. Note that every split is replaced by its value on the fitted
# line, not just the NaNs.
s_extrapolated = pd.concat(
    [pd.Series(my_extrapolate_func(scipy_interpolate_func(df_extrap[s]),
                                   df_extrap[s].index.values),
               index=df_extrap[s].index)
     for s in df_extrap.columns],
    axis=1).T

cols = ['5K', '10K', '15K', '20K']
df[cols] = s_extrapolated.values  # .values assigns positionally, regardless of the integer column labels
df
Out[1]:
   index  Age M/F Country      5K     10K     15K     20K    Half  Official Time
0   2323   38   M     CHI  1300.0  2569.0  3838.0  5107.0  5383.0        10727.0
1   2324   23   M     USA  1286.0  2503.0  3720.0  4937.0  5194.0        10727.0
2   2325   36   M     USA  1268.0  2524.0  3780.0  5036.0  5310.0        10727.0
3   2326   37   M     POL  1234.0  2480.0  3726.0  4972.0  5244.0        10727.0
4   2327   32   M     DEN  1257.0  2520.0  3783.0  5046.0  5319.0        10728.0
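As mentioned above, the positional fit only works because the 5K-20K splits are evenly spaced. Here is a minimal sketch of the distance-based alternative, fitting against the actual kilometre marks and including the half marathon. The names split_cols, dists and impute_row, and the hard-coded distances, are my assumptions, not part of the answer above; unlike that solution, this version only fills the missing cells and leaves the observed splits untouched.

import numpy as np
import pandas as pd

# Assumed distances (km) for the split columns, including the half marathon.
split_cols = ['5K', '10K', '15K', '20K', 'Half']
dists = np.array([5.0, 10.0, 15.0, 20.0, 21.0975])

def impute_row(row):
    row = row.copy()
    y = row[split_cols].astype(float)
    mask = y.notna()
    if mask.all() or mask.sum() < 2:
        return row                                   # nothing to impute, or too few points to fit
    slope, intercept = np.polyfit(dists[mask.values], y[mask].values, 1)
    fitted = slope * dists + intercept               # best fit line over distance
    row[split_cols] = y.fillna(pd.Series(fitted, index=split_cols))
    return row

df_imputed = df.apply(impute_row, axis=1)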
I'm new to Python and I want to understand how I can remove values from my dataset that are 0.00000.
For context, I am working with this dataset: https://www.kaggle.com/ksuchris2000/oklahoma-earthquakes-and-saltwater-injection-wells
The file InjectionWells.csv has some values in its coordinates (LAT and LONG) that I need to remove, but I don't know exactly how. This is so I can make a scatterplot with longitude on X and latitude on Y.
I tried the following but it didn't work. Can you please guide me?
You need to discover the outlier values in LAT and LONG. Your plot is one way to spot them, but here's an automated way.
First, use dat.info() to see which columns are numeric and what the dtypes are. You are interested in LAT and LONG.
Use dat[['LAT','LONG']].describe() on your two columns of interest to get descriptive statistics and find their outlier values.
.describe() takes a percentiles argument, a list that defaults to [.25, .5, .75], i.e. the 25th, 50th and 75th percentiles.
...but you want to exclude rare/outlier values, so also include (say) the 1st/99th and 5th/95th percentiles:
>>> pd.options.display.float_format = '{:.2f}'.format  # suppress unwanted decimal places
>>> dat[['LAT','LONG']].describe(percentiles=[.01,.05,.1,.25,.5,.9,.95,.99])
LAT LONG
count 11125.00 11125.00
mean 35.21 -96.85
std 2.69 7.58
min 0.00 -203.63
1% 33.97 -101.80 # <---- 1st percentile
5% 34.20 -99.76
10% 34.29 -98.25
25% 34.44 -97.63
50% 35.15 -97.37
90% 36.78 -95.95
95% 36.85 -95.74
99% 36.96 -95.48 # <---- 99th percentile
max 73.99 97.70
So the 1st-99th percentile ranges of your LAT and LONG values are:
33.97 <= LAT <= 36.96
-101.80 <= LONG <= -95.48
So now you can keep only the rows inside those ranges (i.e. exclude the outliers), either with vectorized .between() calls or with a one-line apply(..., axis=1):
dat2 = dat[dat['LAT'].between(33.97, 36.96) & dat['LONG'].between(-101.80, -95.48)]
# or, equivalently:
dat2 = dat[dat.apply(lambda row: (33.97 <= row['LAT'] <= 36.96) and (-101.80 <= row['LONG'] <= -95.48), axis=1)]
API# Operator Operator ID WellType ... ZONE Unnamed: 18 Unnamed: 19 Unnamed: 20
0 3500300026.00 PHOENIX PETROCORP INC 19499.00 2R ... CHEROKEE NaN NaN NaN
... ... ... ... ... ... ... ... ... ...
11121 3515323507.00 SANDRIDGE EXPLORATION & PRODUCTION LLC 22281.00 2D ... MUSSELLEM, OKLAHOMA NaN NaN NaN
[10760 rows x 21 columns]
Note this has gone from 11125 down to 10760 rows. So we dropped 365 rows.
Finally it's always a good idea to check that the extreme values of your filtered LAT, LONG are in the range you expected:
>>> dat2[['LAT','LONG']].describe(percentiles=[.01,.05,.1,.25,.5,.9,.95,.99])
LAT LONG
count 10760.00 10760.00
mean 35.33 -97.25
std 0.91 1.11
min 33.97 -101.76
1% 34.08 -101.62
5% 34.21 -99.19
10% 34.30 -98.20
25% 34.44 -97.62
50% 35.13 -97.36
90% 36.77 -95.99
95% 36.83 -95.80
99% 36.93 -95.56
max 36.96 -95.49
PS there's nothing magical about taking the 1st/99th percentiles. You can play with the describe(..., percentiles=...) argument yourself, e.g. with 0.005, 0.002 or 0.001 and their upper counterparts; you get to decide what constitutes an outlier.
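For example, a tighter cut on the tails might look like this (same describe() API, just different percentiles):
>>> dat[['LAT','LONG']].describe(percentiles=[.001, .002, .005, .995, .998, .999])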
You can create a Boolean series by comparing a column of a dataframe to a single value. Then you can use that series to index the dataframe, so that only those rows that meet the condition are selected:
data = df[['LONG', 'LAT']]
data = data[data['LONG'] < -75]
I have a Pandas Series that contains the price evolution of a product (my country has high inflation), or, say, the number of coronavirus-infected people in a certain country. The values in both of these datasets grow exponentially; that means that if you had something like [3, NaN, 27], you'd want the missing value to be interpolated as 9 in this case. I checked the interpolation method in the Pandas documentation but, unless I missed something, I didn't find anything about this type of interpolation.
I can do it manually: you just take the geometric mean, or, in the case of more values, get the average growth rate by computing (final value / initial value)^(1 / distance between them) and then multiply accordingly. But there are a lot of values to fill in my Series, so how do I do this automatically? I guess I'm missing something, since this seems to be something very basic.
Thank you.
You could take the logarithm of your series, interpolate linearly and then transform it back to your exponential scale.
import pandas as pd
import numpy as np
arr = np.exp(np.arange(1,10))
arr = pd.Series(arr)
arr[3] = None
arr
0 2.718282
1 7.389056
2 20.085537
3 NaN
4 148.413159
5 403.428793
6 1096.633158
7 2980.957987
8 8103.083928
dtype: float64
arr = np.log(arr) # Transform according to assumed process.
arr = arr.interpolate('linear') # Interpolate.
np.exp(arr) # Invert previous transformation.
0 2.718282
1 7.389056
2 20.085537
3 54.598150
4 148.413159
5 403.428793
6 1096.633158
7 2980.957987
8 8103.083928
dtype: float64
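If you need this in several places, a small helper keeps it readable. This is just a sketch that wraps the steps above (the name geometric_interpolate is mine), assuming a numeric Series with strictly positive values:

import numpy as np
import pandas as pd

def geometric_interpolate(s):
    # Fill NaNs so that filled values grow at a constant rate between the
    # surrounding known points (i.e. linear interpolation in log space).
    return np.exp(np.log(s).interpolate('linear'))

print(geometric_interpolate(pd.Series([3.0, np.nan, 27.0])))  # -> 3.0, 9.0, 27.0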
I have a large amount of data (93 files, ~150 MB each). The data is a time series, i.e., information about a given set of coordinates (3.3 million latitude-longitude values) is recorded and stored every day for 93 days, and the whole dataset is split into 93 files accordingly. Example of two such files:
Day 1:
lon lat A B day1
68.4 8.4 NaN 20 20
68.4 8.5 16 20 18
68.6 8.4 NaN NaN NaN
.
.
Day 2:
lon lat C D day2
68.4 8.4 NaN NaN NaN
68.4 8.5 24 25 24.5
68.6 8.4 NaN NaN NaN
.
.
I am interested in understanding the nature of the missing data in the columns 'day1', 'day2', 'day3', etc. For example, if the missing values in those columns are evenly distributed over the whole set of coordinates, then the data is probably missing at random, but if the missing values are concentrated in a particular set of coordinates, then my data will become biased. Note that my data is divided into multiple large files and isn't in a very standard form to operate on, which makes it harder to use some tools.
I am looking for a diagnostic tool or visualization in python that can check/show how the missing data is distributed over the set of coordinates so I can impute/ignore it appropriately.
Thanks.
P.S.: This is the first time I am handling missing data, so it would be great to know whether there is a standard workflow that people doing similar work follow.
Assuming that you read a file and name it df, you can count the NaNs using:
df.isnull().sum()
It will return the number of NaNs per column.
You could also use:
df.isnull().sum(axis=1).value_counts()
This, on the other hand, sums the number of NaNs per row and then counts how many rows have no NaNs, 1 NaN, 2 NaNs and so on.
Regarding files of this size: to speed up loading and processing the data, I recommend using Dask and converting your files, preferably to Parquet, so that you can read and write them in parallel (a conversion sketch follows the code below).
You can easily recreate the function above in Dask like this:
from dask import dataframe as dd
dd.read_parquet(file_path).isnull().sum().compute()
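For the one-off CSV-to-Parquet conversion mentioned above, a sketch could look like the following (the file names and glob pattern are assumptions; to_parquet needs pyarrow or fastparquet installed):

from dask import dataframe as dd

# Read all daily CSVs in parallel and write them out as a partitioned Parquet dataset.
ddf = dd.read_csv('day_*.csv')
ddf.to_parquet('data_parquet/')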
Answering the comment question:
Use .loc to slice your dataframe; in the code below I select all rows (:) and two columns (['col1', 'col2']).
df.loc[:, ['col1', 'col2']].isnull().sum(axis=1).value_counts()
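To address the visualization part of the question, i.e. checking whether the missing values cluster at particular coordinates, here is a minimal sketch (my addition, assuming matplotlib and the lon/lat/day1 column layout shown in the question):

import matplotlib.pyplot as plt
from dask import dataframe as dd

ddf = dd.read_parquet(file_path)                       # columns: lon, lat, A, B, day1
missing = ddf[ddf['day1'].isnull()][['lon', 'lat']].compute()

# Scatter the coordinates of the missing observations; visible clusters
# suggest the data is not missing at random.
plt.scatter(missing['lon'], missing['lat'], s=1, alpha=0.3)
plt.xlabel('lon')
plt.ylabel('lat')
plt.title('Locations with missing day1 values')
plt.show()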
I have a DataFrame for a fast Fourier transformed signal.
There is one column for the frequency in Hz and another column for the corresponding amplitude.
I have read a post from a couple of years ago saying that you can use a simple boolean filter to exclude (or keep only) the values that lie more than a few standard deviations from the mean:
df = pd.DataFrame({'Data': np.random.normal(size=200)})  # example dataset of normally distributed data
df[~(np.abs(df.Data - df.Data.mean()) > (3 * df.Data.std()))]  # keep values within 3 std devs; drop the ~ to keep only the outliers instead
The problem is that my signal drops by several orders of magnitude (up to 10 000 times smaller) as the frequency increases up to 50 000 Hz. Therefore, I am unable to use a filter that only exports values above 3 standard deviations, because I would only pick up the "peak" outliers from the first 50 Hz.
Is there a way I can export outliers in my dataframe that are above 3 rolling standard deviations of a rolling mean instead?
This is maybe best illustrated with a quick example. Basically you're comparing your existing data to a new column that is the rolling mean plus three standard deviations, also on a rolling basis.
import pandas as pd
import numpy as np
np.random.seed(123)
df = pd.DataFrame({'Data':np.random.normal(size=200)})
# Create a few outliers (3 of them, at index locations 10, 55, 80)
df.iloc[[10, 55, 80]] = 40.
r = df.rolling(window=20) # Create a rolling object (no computation yet)
mps = r.mean() + 3. * r.std() # Combine a mean and stdev on that object
print(df[df.Data > mps.Data]) # Boolean filter
# Data
# 55 40.0
# 80 40.0
To add a new column that keeps only the outliers, with NaN elsewhere:
df['Peaks'] = df['Data'].where(df.Data > mps.Data, np.nan)
print(df.iloc[50:60])
Data Peaks
50 -1.29409 NaN
51 -1.03879 NaN
52 1.74371 NaN
53 -0.79806 NaN
54 0.02968 NaN
55 40.00000 40.0
56 0.89071 NaN
57 1.75489 NaN
58 1.49564 NaN
59 1.06939 NaN
Here .where returns
An object of same shape as self and whose corresponding entries are
from self where cond is True and otherwise are from other.
I'm trying to find a way to iterate linear-regression code over many, many columns, upwards of Z3. Here is a snippet of the dataframe, called df1:
Time A1 A2 A3 B1 B2 B3
1 1.00 6.64 6.82 6.79 6.70 6.95 7.02
2 2.00 6.70 6.86 6.92 NaN NaN NaN
3 3.00 NaN NaN NaN 7.07 7.27 7.40
4 4.00 7.15 7.26 7.26 7.19 NaN NaN
5 5.00 NaN NaN NaN NaN 7.40 7.51
6 5.50 7.44 7.63 7.58 7.54 NaN NaN
7 6.00 7.62 7.86 7.71 NaN NaN NaN
This code returns the slope coefficient of a linear regression for ONE column only and concatenates the value to a numpy array called series; here is what it looks like for extracting the slope of the first column:
from sklearn.linear_model import LinearRegression
import numpy as np

series = np.array([])                # blank array to append results to
df2 = df1[~np.isnan(df1['A1'])]      # remove NaN values for this column so sklearn can fit
df3 = df2[['Time', 'A1']]
npMatrix = np.matrix(df3)
X, Y = npMatrix[:, 0], npMatrix[:, 1]
slope = LinearRegression().fit(X, Y)
m = slope.coef_[0]
series = np.concatenate((series, m), axis=0)
As it stands, I am reusing this block of code and replacing "A1" with each new column name all the way up to "Z3", which is extremely inefficient. I know there are easy ways to do this with some modules, but the intermediate NaN values in the time series seem to limit me to this method, or something like it.
I tried using a for loop such as:
for col in df1.columns:
and replacing 'A1', for example with col in the code, but this does not seem to be working.
Is there any way I can do this more efficiently?
Thank you!
One liner (or three)
time = df[['Time']]
pd.DataFrame(np.linalg.pinv(time.T.dot(time)).dot(time.T).dot(df.fillna(0)),
['Slope'], df.columns)
Broken down with a bit of explanation:
This uses the closed form of OLS (with no intercept term): slope = (XᵀX)⁻¹XᵀY.
In this case X is time, where we define time as df[['Time']]. I used double brackets to preserve the dataframe and its two dimensions; with single brackets I'd have gotten a series and one dimension, and the dot products wouldn't be as pretty.
(XᵀX)⁻¹Xᵀ is np.linalg.pinv(time.T.dot(time)).dot(time.T).
Y is df.fillna(0). Yes, we could have done one column at a time, but why, when we can do them all at once? You do have to deal with the NaNs somehow; here I simply placed zeroes in the NaN spots. Note that this is not quite the same as fitting each column only over the times where it has data (the zeros shrink those slopes toward zero a little), but it keeps everything in one matrix product; if that bias matters, use the looping answer below.
Finally, I use pd.DataFrame(stuff, ['Slope'], df.columns) to collect all the slopes in one place with the original labels.
Note that I also calculated the slope of the regression of Time against itself. Why not? It was there. Its value is 1.0. Great, I probably did it right!
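If you also want an intercept (the one-liner above regresses through the origin), the same closed form works with a constant column added. This is a sketch of my own, not part of the original answer:

import numpy as np
import pandas as pd

X = df[['Time']].assign(const=1.0)    # design matrix with an added intercept column
beta = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(df.fillna(0))

# One row per coefficient, one column per original column of df.
pd.DataFrame(beta, index=['Slope', 'Intercept'], columns=df.columns)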
Looping is a decent strategy for a modest number (say, fewer than thousands) of columns. Without seeing your implementation, I can't say what's wrong, but here's my version, which works:
slopes = []
for c in df1.columns:
    if c == "Time":
        continue                      # skip the x variable itself
    mask = ~np.isnan(df1[c])          # drop the NaNs in this column only
    x = np.atleast_2d(df1.Time[mask].values).T
    y = np.atleast_2d(df1[c][mask].values).T
    reg = LinearRegression().fit(x, y)
    slopes.append(reg.coef_[0])
I've simplified your code a bit to avoid creating so many temporary DataFrame objects, but it should work fine your way too.
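To put the results back under their column names (a small addition of mine, assuming the loop above has run):

import numpy as np
import pandas as pd

slope_series = pd.Series(np.ravel(slopes),
                         index=[c for c in df1.columns if c != "Time"])
print(slope_series)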