numpy pad array with nan, getting strange float instead - python

I'm trying to pad an array with np.nan
import numpy as np
print np.version.version
# 1.10.2
combine = lambda real, theo: np.vstack((theo, np.pad(real, (0, theo.shape[0] - real.shape[0]), 'constant', constant_values=np.nan)))
real = np.arange(20)
theoretical = np.linspace(0, 20, 100)
result = combine(real, theoretical)
np.any(np.isnan(result))
# False
Inspecting result, it seems instead of np.nan, the array is getting padded with -9.22337204e+18. What's going on here? How can I get np.nan?

The result of pad has the same dtype as the input array, and np.nan is a float, so padding an integer array can never produce NaN:
In [874]: np.pad(np.ones(2,dtype=int),1,mode='constant',constant_values=(np.nan,))
Out[874]: array([-2147483648, 1, 1, -2147483648])
In [875]: np.pad(np.ones(2,dtype=float),1,mode='constant',constant_values=(np.nan,))
Out[875]: array([ nan, 1., 1., nan])
The int pad is np.nan cast as an integer:
In [878]: np.array(np.nan).astype(int)
Out[878]: array(-2147483648)
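Casting the array to float before padding fixes it; for example, a minimal tweak of the question's combine:
combine = lambda real, theo: np.vstack((theo,
    np.pad(real.astype(float),                    # float dtype, so NaN survives the pad
           (0, theo.shape[0] - real.shape[0]),
           'constant', constant_values=np.nan)))
result = combine(real, theoretical)
np.any(np.isnan(result))
# True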

Related

Numpy floor float values to int

I have an array of floats, and I want to floor them to the nearest integer so I can use them as indices.
For example:
In [2]: import numpy as np
In [3]: arr = np.random.rand(1, 10) * 10
In [4]: arr
Out[4]:
array([[4.97896461, 0.21473121, 0.13323678, 3.40534157, 5.08995577,
6.7924586 , 1.82584208, 6.73890807, 2.45590354, 9.85600841]])
In [5]: arr = np.floor(arr)
In [6]: arr
Out[6]: array([[4., 0., 0., 3., 5., 6., 1., 6., 2., 9.]])
In [7]: arr.dtype
Out[7]: dtype('float64')
They are still floats after flooring; is there a way to automatically cast them to integers?
I have edited the answer with #DanielF's explanation:
"floor doesn't convert to integer, it just gives integer-valued floats, so you still need an astype to change to int"
Check this code to understand the solution:
import numpy as np
arr = np.random.rand(1, 10) * 10
print(arr)
arr = np.floor(arr).astype(int)
print(arr)
OUTPUT:
[[2.76753828 8.84095843 2.5537759 5.65017407 7.77493733 6.47403036
7.72582766 5.03525625 9.75819442 9.10578944]]
[[2 8 2 5 7 6 7 5 9 9]]
Why not just use:
np.random.randint(1,10)
As an alternative to changing the type after flooring, you can provide an output array of the desired data type to np.floor (and to any other numpy ufunc). For example, if you want the output as np.int32, do the following:
import numpy as np
arr = np.random.rand(1, 10) * 10
out = np.empty_like(arr, dtype=np.int32)
np.floor(arr, out=out, casting='unsafe')
As the casting argument already indicates, you should know what you are doing when casting outputs into different types. However, in your case it is not really unsafe.
That said, I would not call np.floor at all in your case, because all values are greater than zero. Therefore, the simplest and probably fastest solution to your problem is a direct cast to integer.
import numpy as np
arr = (np.random.rand(1, 10) * 10).astype(int)
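One caveat with the direct cast: astype(int) truncates toward zero, while np.floor rounds toward negative infinity, so the two only agree for non-negative values (as in this question). A small illustration:
import numpy as np
arr = np.array([-1.5, -0.5, 0.5, 1.5])
print(np.floor(arr).astype(int))  # [-2 -1  0  1]
print(arr.astype(int))            # [-1  0  0  1]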

Pearson correlation and nan values

I have two CSV files with hundreds of columns, and I want to calculate the Pearson correlation coefficient and p-value for each pair of matching columns. The problem is that when there is missing data ("NaN") in a column, I get an error. When .dropna removes the NaN values from the columns, the shapes of X and Y sometimes end up unequal (depending on how many NaNs were removed), and I receive this error:
"ValueError: operands could not be broadcast together with shapes (1020,) (1016,)"
Question: If row #8 in one CSV is "nan", is there any way to remove the same row from the other CSV too, and do the analysis for every column based only on rows that have values in both CSV files?
import pandas as pd
import scipy
import csv
import numpy as np
from scipy import stats

df = pd.read_csv("D:/Insitu-Daily.csv", header=None)
dg = pd.read_csv("D:/Model-Daily.csv", header=None)

pearson_corr_set = []
pearson_p_set = []

for i in range(1, df.shape[1]):
    X = df[i].dropna(axis=0, how='any')
    Y = dg[i].dropna(axis=0, how='any')
    [pearson_corr, pearson_p] = scipy.stats.stats.pearsonr(X, Y)
    pearson_corr_set = np.append(pearson_corr_set, pearson_corr)
    pearson_p_set = np.append(pearson_p_set, pearson_p)

with open('D:/Results.csv', 'wb') as file:
    str1 = ",".join(str(i) for i in np.asarray(pearson_corr_set))
    file.write(str1)
    file.write('\n')
    str1 = ",".join(str(i) for i in np.asarray(pearson_p_set))
    file.write(str1)
    file.write('\n')
Here is one solution. First compute a mask of the positions that are valid (not NaN) in both arrays, then use it to drop the bad entries from each array.
x = np.array([5, 1, 6, 9, 10, np.nan, 1, 1, np.nan])
y = np.array([4, 4, 5, np.nan, 6, 2, 1, 8, 1])
good = ~np.logical_or(np.isnan(x), np.isnan(y))
np.compress(good, x) # array([ 5., 1., 6., 10., 1., 1.])
np.compress(good, y) # array([ 4., 4., 5., 6., 1., 8.])
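The cleaned pairs can then go straight into scipy.stats.pearsonr; a minimal sketch using the same masking idea:
from scipy import stats
good = ~(np.isnan(x) | np.isnan(y))   # positions valid in both arrays
r, p = stats.pearsonr(x[good], y[good])
print(r, p)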
Instead of dropna, try using isnan and boolean indexing:
for i in range(1, df.shape[1]):
    df_sub = df[i]
    dg_sub = dg[i]
    # mask is True where the ith rows of df and dg are both NOT nan.
    mask = ~np.isnan(df_sub) & ~np.isnan(dg_sub)
    X = df_sub[mask]  # this returns a 1D array of length mask.sum()
    Y = dg_sub[mask]
    ... your code continues.
Hope that helps!
Why not combine them into a single DataFrame and just use dropna on it; all rows containing NaN will be removed.
newdf = pd.concat([df, dg], axis=1, sort=False)
newdf = newdf.dropna()
I suggest getting a list of the column names of both DataFrames and using that in the for loop:
dfnames = list(df.columns.values)
dgnames = list(dg.columns.values)
for i in range(len(dfnames)):
    X = newdf[dfnames[i]].dropna(axis=0, how='any')
    Y = newdf[dgnames[i]].dropna(axis=0, how='any')
    [pearson_corr, pearson_p] = scipy.stats.stats.pearsonr(X, Y)
    pearson_corr_set = np.append(pearson_corr_set, pearson_corr)
    pearson_p_set = np.append(pearson_p_set, pearson_p)
Also, you can write the results to CSV without that for loop; see pandas.DataFrame.to_csv.
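Putting the pieces together, here is one hedged sketch of the full loop with pairwise NaN dropping; for each column it keeps only the rows present in both files (the 'insitu'/'model' labels are just illustrative):
import pandas as pd
from scipy import stats
for col in df.columns:
    pair = pd.concat([df[col], dg[col]], axis=1, keys=['insitu', 'model']).dropna()
    r, p = stats.pearsonr(pair['insitu'], pair['model'])
    print(col, r, p)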

Efficient way of merging two numpy masked arrays

I have two numpy masked arrays which I want to merge. I'm using the following code:
import numpy as np
import matplotlib.pyplot as plt
a = np.zeros((10000, 10000), dtype=np.int16)
a[:5000, :5000] = 1
am = np.ma.masked_equal(a, 0)
b = np.zeros((10000, 10000), dtype=np.int16)
b[2500:7500, 2500:7500] = 2
bm = np.ma.masked_equal(b, 0)
arr = np.ma.array(np.dstack((am, bm)), mask=np.dstack((am.mask, bm.mask)))
arr = np.prod(arr, axis=2)
plt.imshow(arr)
The problem is that the np.prod() operation is very slow (4 seconds in my computer). Is there an alternative way of getting a merged array in a more efficient way?
Instead of your last two lines using dstack() and prod(), try this:
arr = np.ma.array(am.filled(1) * bm.filled(1), mask=(am.mask * bm.mask))
Now you don't need prod() at all, and you avoid allocating the 3D array entirely.
I took another approach that may not be particularly efficient, but is reasonably easy to extend and implement.
(I know I'm answering a question that is over 3 years old with functionality that has been around in numpy a long time, but bear with me)
The np.where function in numpy has two main purposes (it is a bit weird). The first is to give you the indices where a boolean array is True:
>>> import numpy as np
>>> a = np.arange(12).reshape(3, 4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> m = (a % 3 == 0)
>>> m
array([[ True, False, False, True],
[False, False, True, False],
[False, True, False, False]], dtype=bool)
>>> row_ind, col_ind = np.where(m)
>>> row_ind
array([0, 0, 1, 2])
>>> col_ind
array([0, 3, 2, 1])
The other purpose of the np.where function is to pick from two arrays based on whether the given boolean array is True/False:
>>> np.where(m, a, np.zeros(a.shape))
array([[ 0., 0., 0., 3.],
[ 0., 0., 6., 0.],
[ 0., 9., 0., 0.]])
Turns out, there is also a numpy.ma.where which deals with masked arrays...
Given a list of masked arrays of the same shape, my code then looks like:
merged = masked_arrays[0]
for ma in masked_arrays[1:]:
    merged = np.ma.where(ma.mask, merged, ma)
As I say, not particularly efficient, but certainly easy enough to implement.
HTH
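If you prefer a one-liner, the same fold can be written with functools.reduce (a small sketch assuming masked_arrays is a non-empty list of equally shaped masked arrays, as above):
from functools import reduce
merged = reduce(lambda acc, m: np.ma.where(m.mask, acc, m), masked_arrays[1:], masked_arrays[0])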
Inspired by the accepted answer I've found a simple way of merging masked arrays. It works by doing some logical operations on the masks and simply adding 0-filled arrays.
import numpy as np
import matplotlib.pyplot as plt
a = np.zeros((1000, 1000), dtype=np.int16)
a[:500, :500] = 2
am = np.ma.masked_equal(a, 0)
b = np.zeros((1000, 1000), dtype=np.int16)
b[250:750, 250:750] = 3
bm = np.ma.masked_equal(b, 0)
c = np.zeros((1000, 1000), dtype=np.int16)
c[500:1000, 500:1000] = 5
cm = np.ma.masked_equal(c, 0)
bm.mask = np.logical_or(np.logical_and(am.mask, bm.mask), np.logical_not(am.mask))
am = np.ma.array(am.filled(0) + bm.filled(0), mask=(am.mask * bm.mask))
cm.mask = np.logical_or(np.logical_and(am.mask, cm.mask), np.logical_not(am.mask))
am = np.ma.array(am.filled(0) + cm.filled(0), mask=(am.mask * cm.mask))
plt.imshow(am)
I hope someone finds this helpful sometime. Masked arrays don't seem to be very efficient though, so if someone finds an alternative way to merge arrays I'd be happy to know.
Update: Based on @morningsun's comment, this implementation is 30% faster and much simpler:
import numpy as np
import matplotlib.pyplot as plt
a = np.zeros((1000, 1000), dtype=np.int16)
a[:500, :500] = 2
am = np.ma.masked_equal(a, 0)
b = np.zeros((1000, 1000), dtype=np.int16)
b[250:750, 250:750] = 3
bm = np.ma.masked_equal(b, 0)
c = np.zeros((1000, 1000), dtype=np.int16)
c[500:1000, 500:1000] = 5
cm = np.ma.masked_equal(c, 0)
am[am.mask] = bm[am.mask]
am[am.mask] = cm[am.mask]
plt.imshow(am)

computing with nan's with numpy's ma module

I do not understand the behavior of this numpy.ma.max (min, mean, etc.)
import numpy as np
arr = np.ma.array([0,np.nan,1])
np.ma.max(arr)
-> nan
I thought this was supposed to return a value excluding nan's? The only way I can get a real value is
np.nanmax(np.asarray(arr))
Is this right, or am I using numpy.ma.max incorrectly?
You need to create the mask:
import numpy as np
arr = np.ma.array([0,np.nan,1])
print(np.ma.max(arr))
# >>>nan # since there is no mask
marr = np.ma.masked_array([0,np.nan,1], np.isnan(arr))
print(np.ma.max(marr))
# >>> 1.0 # since the mask tells max to ignore the nan. The max of the rest (0, 1) is 1.
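Alternatively, let numpy.ma build the mask from the data itself; a small sketch using np.ma.fix_invalid, which masks both NaN and inf values:
marr = np.ma.fix_invalid([0, np.nan, 1])
print(np.ma.max(marr))   # 1.0
print(np.ma.mean(marr))  # 0.5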
A straightforward way to create the mask is to use the np.ma.masked_invalid function (see http://docs.scipy.org/doc/numpy/reference/generated/numpy.ma.masked_invalid.html#numpy.ma.masked_invalid)
Here is an example:
# Makes example reproducible
np.random.seed(seed=1337)
# Generate some data
X = np.random.random((5,5))
X[X > .5] = np.nan
print X
array([[ 0.26202468, 0.15868397, 0.27812652, 0.45931689, 0.32100054],
[ nan, 0.26194293, nan, nan, 0.11527423],
[ 0.38627507, nan, 0.12505793, nan, 0.44322487],
[ nan, nan, 0.36126157, 0.41610394, nan],
[ nan, 0.18780841, 0.28816715, nan, 0.49964826]])
# Mask will hide both np.nan and np.inf values
masked_X = np.ma.masked_invalid(X, copy=False)
# Voila
print np.max(masked_X, axis=0)
masked_array(data = [0.38627506863435945 0.26194292556514465 0.36126157241743073
0.45931688721456665 0.49964826137201246],
mask = [False False False False False],
fill_value = 1e+20)
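If a masked array is not actually needed, plain numpy's NaN-aware reductions (np.nanmax, np.nanmin, np.nanmean, ...) give the same result directly, as the question itself hints:
arr = np.array([0, np.nan, 1])
print(np.nanmax(arr))   # 1.0
print(np.nanmean(arr))  # 0.5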

Interpolate NaN values in a numpy array

Is there a quick way of replacing all NaN values in a numpy array with (say) the linearly interpolated values?
For example,
[1 1 1 nan nan 2 2 nan 0]
would be converted into
[1 1 1 1.3 1.6 2 2 1 0]
Let's first define a simple helper function to make it more straightforward to handle the indices and logical indices of NaNs:
import numpy as np

def nan_helper(y):
    """Helper to handle indices and logical indices of NaNs.

    Input:
        - y, 1d numpy array with possible NaNs
    Output:
        - nans, logical indices of NaNs
        - index, a function, with signature indices = index(logical_indices),
          to convert logical indices of NaNs to 'equivalent' indices
    Example:
        >>> # linear interpolation of NaNs
        >>> nans, x = nan_helper(y)
        >>> y[nans] = np.interp(x(nans), x(~nans), y[~nans])
    """
    return np.isnan(y), lambda z: z.nonzero()[0]
The nan_helper(.) can now be utilized like this:
>>> y = np.array([1, 1, 1, np.nan, np.nan, 2, 2, np.nan, 0], dtype=float)
>>>
>>> nans, x = nan_helper(y)
>>> y[nans] = np.interp(x(nans), x(~nans), y[~nans])
>>>
>>> print(y.round(2))
[ 1.    1.    1.    1.33  1.67  2.    2.    1.    0.  ]
---
Although it may at first seem a little bit overkill to write a separate function just for this:
>>> nans, x= np.isnan(y), lambda z: z.nonzero()[0]
it will eventually pay dividends.
So, whenever you are working with NaNs related data, just encapsulate all the (new NaN related) functionality needed, under some specific helper function(s). Your code base will be more coherent and readable, because it follows easily understandable idioms.
Interpolation, indeed, is a nice context to see how NaN handling is done, but similar techniques are utilized in various other contexts as well.
I came up with this code:
import numpy as np
nan = np.nan
A = np.array([1, nan, nan, 2, 2, nan, 0])
ok = ~np.isnan(A)
xp = ok.ravel().nonzero()[0]
fp = A[~np.isnan(A)]
x = np.isnan(A).ravel().nonzero()[0]
A[np.isnan(A)] = np.interp(x, xp, fp)
print(A)
It prints
[ 1. 1.33333333 1.66666667 2. 2. 1. 0. ]
Just use numpy's logical functions and its where statement to apply a 1D interpolation.
import numpy as np
from scipy import interpolate

def fill_nan(A):
    '''
    interpolate to fill nan values
    '''
    inds = np.arange(A.shape[0])
    good = np.where(np.isfinite(A))
    f = interpolate.interp1d(inds[good], A[good], bounds_error=False)
    B = np.where(np.isfinite(A), A, f(inds))
    return B
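For example, applied to the array from the question:
A = np.array([1, 1, 1, np.nan, np.nan, 2, 2, np.nan, 0], dtype=float)
print(fill_nan(A))
# [1.  1.  1.  1.33333333  1.66666667  2.  2.  1.  0.]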
For two dimensional data, the SciPy's griddata works fairly well for me:
>>> import numpy as np
>>> from scipy.interpolate import griddata
>>>
>>> # SETUP
>>> a = np.arange(25).reshape((5, 5)).astype(float)
>>> a
array([[ 0., 1., 2., 3., 4.],
[ 5., 6., 7., 8., 9.],
[ 10., 11., 12., 13., 14.],
[ 15., 16., 17., 18., 19.],
[ 20., 21., 22., 23., 24.]])
>>> a[np.random.randint(2, size=(5, 5)).astype(bool)] = np.NaN
>>> a
array([[ nan, nan, nan, 3., 4.],
[ nan, 6., 7., nan, nan],
[ 10., nan, nan, 13., nan],
[ 15., 16., 17., nan, 19.],
[ nan, nan, 22., 23., nan]])
>>>
>>> # THE INTERPOLATION
>>> x, y = np.indices(a.shape)
>>> interp = np.array(a)
>>> interp[np.isnan(interp)] = griddata(
... (x[~np.isnan(a)], y[~np.isnan(a)]), # points we know
... a[~np.isnan(a)], # values we know
... (x[np.isnan(a)], y[np.isnan(a)])) # points to interpolate
>>> interp
array([[ nan, nan, nan, 3., 4.],
[ nan, 6., 7., 8., 9.],
[ 10., 11., 12., 13., 14.],
[ 15., 16., 17., 18., 19.],
[ nan, nan, 22., 23., nan]])
I am using it on 3D images, operating on 2D slices (4000 slices of 350x350). The whole operation still takes about an hour :/
Or building on Winston's answer
def pad(data):
    bad_indexes = np.isnan(data)
    good_indexes = np.logical_not(bad_indexes)
    good_data = data[good_indexes]
    interpolated = np.interp(bad_indexes.nonzero()[0], good_indexes.nonzero()[0], good_data)
    data[bad_indexes] = interpolated
    return data

A = np.array([[1, 20, 300],
              [np.nan, np.nan, np.nan],
              [3, 40, 500]])
A = np.apply_along_axis(pad, 0, A)
print(A)
Result
[[ 1. 20. 300.]
[ 2. 30. 400.]
[ 3. 40. 500.]]
It might be easier to change how the data is being generated in the first place, but if not:
bad_indexes = np.isnan(data)
Create a boolean array indicating where the nans are
good_indexes = np.logical_not(bad_indexes)
Create a boolean array indicating where the good values area
good_data = data[good_indexes]
A restricted version of the original data excluding the nans
interpolated = np.interp(bad_indexes.nonzero()[0], good_indexes.nonzero()[0], good_data)
Run all the bad indexes through interpolation
data[bad_indexes] = interpolated
Replace the original data with the interpolated values.
I use the interpolation for replacing all NaN values.
A = np.array([1, np.nan, np.nan, 2, 2, np.nan, 0])
np.interp(np.arange(len(A)),
np.arange(len(A))[np.isnan(A) == False],
A[np.isnan(A) == False])
Output :
array([1. , 1.33333333, 1.66666667, 2. , 2. , 1. , 0. ])
I needed an approach that would also fill in NaN's at the start or end of the data, which the main answer does not appear to do.
The function I came up with uses a linear regression to fill in the NaN's. This overcomes my problem:
import numpy as np

def linearly_interpolate_nans(y):
    # Fit a linear regression to the non-nan y values

    # Create X matrix for linreg with an intercept and an index
    X = np.vstack((np.ones(len(y)), np.arange(len(y))))

    # Get the non-NaN values of X and y
    X_fit = X[:, ~np.isnan(y)]
    y_fit = y[~np.isnan(y)].reshape(-1, 1)

    # Estimate the coefficients of the linear regression
    beta = np.linalg.lstsq(X_fit.T, y_fit)[0]

    # Fill in all the nan values using the predicted coefficients
    y.flat[np.isnan(y)] = np.dot(X[:, np.isnan(y)].T, beta)
    return y
Here's an example usage case:
# Make an array according to some linear function
y = np.arange(12) * 1.5 + 10.

# First and last value are NaN
y[0] = np.nan
y[-1] = np.nan

# 30% of other values are NaN
for i in range(len(y)):
    if np.random.rand() > 0.7:
        y[i] = np.nan

# NaN's are filled in!
print(y)
print(linearly_interpolate_nans(y))
Slightly optimized version based on the response of BRYAN WOODS. It handles the starting and ending values of the source data correctly, and it is 25-30% faster than the original version. Also you may use different kinds of interpolation (see the scipy.interpolate.interp1d documentation for details).
import numpy as np
from scipy.interpolate import interp1d

def fill_nans_scipy1(padata, pkind='linear'):
    """
    Interpolates data to fill nan values

    Parameters:
        padata : nd array
            source data with np.NaN values
    Returns:
        nd array
            resulting data with interpolated values instead of nans
    """
    aindexes = np.arange(padata.shape[0])
    agood_indexes, = np.where(np.isfinite(padata))
    f = interp1d(agood_indexes,
                 padata[agood_indexes],
                 bounds_error=False,
                 copy=False,
                 fill_value="extrapolate",
                 kind=pkind)
    return f(aindexes)
In [17]: adata = np.array([1, 2, np.NaN, 4])
Out[18]: array([ 1., 2., nan, 4.])
In [19]: fill_nans_scipy1(adata)
Out[19]: array([1., 2., 3., 4.])
Building on the answer by Bryan Woods, I modified his code to also convert lists consisting only of NaN to a list of zeros:
import numpy as np
from scipy.interpolate import interp1d

def fill_nan(A):
    '''
    interpolate to fill nan values
    '''
    inds = np.arange(A.shape[0])
    good = np.where(np.isfinite(A))
    if len(good[0]) == 0:
        return np.nan_to_num(A)
    f = interp1d(inds[good], A[good], bounds_error=False)
    B = np.where(np.isfinite(A), A, f(inds))
    return B
Simple addition, I hope it will be of use to someone.
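For instance, an all-NaN input now comes back as zeros instead of failing inside interp1d:
print(fill_nan(np.array([np.nan, np.nan, np.nan])))  # [0. 0. 0.]
print(fill_nan(np.array([1.0, np.nan, 3.0])))        # [1. 2. 3.]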
Interpolation and extrapolation with padding keywords
The following solution interpolates the nan values in an array by np.interp, if a finite value is present on both sides. Nan values at the borders are handled by np.pad with modes like constant or reflect.
import numpy as np
import matplotlib.pyplot as plt

def extrainterpolate_nans_1d(
        arr, kws_pad=({'mode': 'edge'}, {'mode': 'edge'})
):
    """Interpolates and extrapolates nan values.

    Interpolation is linear, compare np.interp(..).
    Extrapolation works with pad keywords, compare np.pad(..).

    Parameters
    ----------
    arr : np.ndarray, shape (N,)
        Array to replace nans in.
    kws_pad : dict or (dict, dict)
        kwargs for np.pad on left and right side

    Returns
    -------
    np.ndarray, shape (N,)
        Array with nan values interpolated and extrapolated away.

    See Also
    --------
    https://numpy.org/doc/stable/reference/generated/numpy.interp.html
    https://numpy.org/doc/stable/reference/generated/numpy.pad.html
    https://stackoverflow.com/a/43821453/7128154
    """
    assert arr.ndim == 1
    if isinstance(kws_pad, dict):
        kws_pad_left = kws_pad
        kws_pad_right = kws_pad
    else:
        assert len(kws_pad) == 2
        assert isinstance(kws_pad[0], dict)
        assert isinstance(kws_pad[1], dict)
        kws_pad_left = kws_pad[0]
        kws_pad_right = kws_pad[1]

    arr_ip = arr.copy()

    # interpolation
    inds = np.arange(len(arr_ip))
    nan_msk = np.isnan(arr_ip)
    arr_ip[nan_msk] = np.interp(inds[nan_msk], inds[~nan_msk], arr[~nan_msk])

    # determine pad range
    i0 = next(
        (ids for ids, val in np.ndenumerate(arr) if not np.isnan(val)), 0)[0]
    i1 = next(
        (ids for ids, val in np.ndenumerate(arr[::-1]) if not np.isnan(val)), 0)[0]
    i1 = len(arr) - i1
    # print('pad in range [0:{:}] and [{:}:{:}]'.format(i0, i1, len(arr)))

    # pad
    arr_pad = np.pad(
        arr_ip[i0:], pad_width=[(i0, 0)], **kws_pad_left)
    arr_pad = np.pad(
        arr_pad[:i1], pad_width=[(0, len(arr) - i1)], **kws_pad_right)
    return arr_pad
# setup data
ys = np.arange(30, dtype=float)**2/20
ys[:5] = np.nan
ys[20:] = 20
ys[28:] = np.nan
ys[[7, 13, 14, 18, 22]] = np.nan
ys_ie0 = extrainterpolate_nans_1d(ys)
kws_pad_sym = {'mode': 'symmetric'}
kws_pad_const7 = {'mode': 'constant', 'constant_values':7.}
ys_ie1 = extrainterpolate_nans_1d(ys, kws_pad=(kws_pad_sym, kws_pad_const7))
ys_ie2 = extrainterpolate_nans_1d(ys, kws_pad=(kws_pad_const7, kws_pad_sym))
fig, ax = plt.subplots()
ax.scatter(np.arange(len(ys)), ys, s=15**2, label='ys')
ax.scatter(np.arange(len(ys)), ys_ie0, s=8**2, label='ys_ie0, left_pad edge, right_pad edge')
ax.scatter(np.arange(len(ys)), ys_ie1, s=6**2, label='ys_ie1, left_pad symmetric, right_pad 7')
ax.scatter(np.arange(len(ys)), ys_ie2, s=4**2, label='ys_ie2, left_pad 7, right_pad symmetric')
ax.legend()
As suggested by an earlier comment, the best way to do this is to use a peer reviewed implementation. The pandas library has an interpolation method for 1d data, which interpolates np.nan values in Series or DataFrame:
pandas.Series.interpolate or pandas.DataFrame.interpolate
The documentation is very concise; I recommend reading through it! My implementation:
import pandas as pd

magnitudes_series = pd.Series(magnitudes)  # Convert np.array to pd.Series
magnitudes_series.interpolate(
    # I used "akima" because the second derivative of my data has frequent drops to 0
    method="akima",
    # Interpolate from both sides of the sequence, up to you (made sense for my data)
    limit_direction="both",
    # Interpolate only np.nan sequences that have number sequences at the ends of the respective np.nan sequences
    limit_area="inside",
    inplace=True,
)
# I chose to remove np.nan at the tails of data sequence
magnitudes_series.dropna(inplace=True)
result_in_numpy_array = magnitudes_series.values
Importing scipy looks like overkill to me. Here's a simple way using numpy and maintaining the same conventions as np.interp
import numpy as np

def interp_nans(x: [float], left=None, right=None, period=None) -> [float]:
    """
    e.g. [1 1 1 nan nan 2 2 nan 0] -> [1 1 1 1.3 1.6 2 2 1 0]
    """
    xp = [i for i, yi in enumerate(x) if np.isfinite(yi)]
    fp = [yi for i, yi in enumerate(x) if np.isfinite(yi)]
    return list(np.interp(x=list(range(len(x))), xp=xp, fp=fp, left=left, right=right, period=period))
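Using it on the question's example (values rounded for display):
print([round(v, 2) for v in interp_nans([1, 1, 1, np.nan, np.nan, 2, 2, np.nan, 0])])
# [1.0, 1.0, 1.0, 1.33, 1.67, 2.0, 2.0, 1.0, 0.0]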
