Scipy Fast 1-D interpolation without any loop - python

I have two 2D arrays, x(ni, nj) and y(ni, nj), that I need to interpolate over one axis. I want to interpolate along the last axis for every ni.
I wrote
import numpy as np
from scipy.interpolate import interp1d

z = np.asarray([200, 300, 400, 500, 600])
out = []
for i in range(ni):
    f = interp1d(x[i, :], y[i, :], kind='linear')
    out.append(f(z))
out = np.asarray(out)
However, I think this method is inefficient and slow due to the loop when the array size is large. What is the fastest way to interpolate a multi-dimensional array like this? Is there any way to perform linear and cubic interpolation without a loop? Thanks.

The method you propose does have a python loop, so for large values of ni it is going to get slow. That said, unless you are going to have large ni you shouldn't worry much.
I have created sample input data with the following code:
def sample_data(n_i, n_j, z_shape):
    x = np.random.rand(n_i, n_j) * 1000
    x.sort()
    x[:, 0] = 0
    x[:, -1] = 1000
    y = np.random.rand(n_i, n_j)
    z = np.random.rand(*z_shape) * 1000
    return x, y, z
And have tested them with these two versions of linear interpolation:
def interp_1(x, y, z):
    rows, cols = x.shape
    out = np.empty((rows,) + z.shape, dtype=y.dtype)
    for j in xrange(rows):
        out[j] = interp1d(x[j], y[j], kind='linear', copy=False)(z)
    return out

def interp_2(x, y, z):
    rows, cols = x.shape
    row_idx = np.arange(rows).reshape((rows,) + (1,) * z.ndim)
    col_idx = np.argmax(x.reshape(x.shape + (1,) * z.ndim) > z, axis=1) - 1
    ret = y[row_idx, col_idx + 1] - y[row_idx, col_idx]
    ret /= x[row_idx, col_idx + 1] - x[row_idx, col_idx]
    ret *= z - x[row_idx, col_idx]
    ret += y[row_idx, col_idx]
    return ret
interp_1 is an optimized version of your code, following Dave's answer. interp_2 is a vectorized implementation of linear interpolation that avoids any Python loop whatsoever. Coding something like this requires a sound understanding of broadcasting and indexing in numpy, and some things are going to be less optimized than what interp1d does. A prime example is finding the bin in which to interpolate a value: interp1d can stop searching as soon as it has found the bin, while the function above compares every value against all bins.
So the result is going to be very dependent on what n_i and n_j are, and even how long your array z of values to interpolate is. If n_j is small and n_i is large, you should expect an advantage from interp_2, and from interp_1 if it is the other way around. Smaller z should be an advantage to interp_2, longer ones to interp_1.
I have actually timed both approaches with a variety of n_i and n_j, for z of shape (5,) and (50,), here are the graphs:
So it seems that for z of shape (5,) you should go with interp_2 whenever n_j < 1000, and with interp_1 elsewhere. Not surprisingly, the threshold is different for z of shape (50,), now being around n_j < 100. It is tempting to conclude that you should stick with your code if n_j * len(z) > 5000 and switch to something like interp_2 above otherwise, but there is a great deal of extrapolation in that statement! If you want to experiment further yourself, here's the code I used to produce the graphs.
import timeit
import matplotlib.pyplot as plt

n_s = np.logspace(1, 3.3, 25)
int_1 = np.empty((len(n_s),) * 2)
int_2 = np.empty((len(n_s),) * 2)
z_shape = (5,)
for i, n_i in enumerate(n_s):
    print int(n_i)
    for j, n_j in enumerate(n_s):
        x, y, z = sample_data(int(n_i), int(n_j), z_shape)
        int_1[i, j] = min(timeit.repeat('interp_1(x, y, z)',
                                        'from __main__ import interp_1, x, y, z',
                                        repeat=10, number=1))
        int_2[i, j] = min(timeit.repeat('interp_2(x, y, z)',
                                        'from __main__ import interp_2, x, y, z',
                                        repeat=10, number=1))

cs = plt.contour(n_s, n_s, np.transpose(int_1 - int_2))
plt.clabel(cs, inline=1, fontsize=10)
plt.xlabel('n_i')
plt.ylabel('n_j')
plt.title('timeit(interp_2) - timeit(interp_1), z.shape=' + str(z_shape))
plt.show()
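As a further variant not in the original answer: numpy's np.interp is a compiled linear interpolator, so even a plain per-row Python loop over it will often beat constructing an interp1d object for every row, as long as each row of x is sorted and z stays inside its range. A rough sketch:

def interp_np(x, y, z):
    # hypothetical helper: still loops over rows, but each row is handled by
    # np.interp instead of a freshly built interp1d object
    out = np.empty((x.shape[0],) + z.shape, dtype=y.dtype)
    for j in range(x.shape[0]):
        out[j] = np.interp(z, x[j], y[j])
    return out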

One optimization is to allocate the result array once like so:
import numpy as np
from scipy.interpolate import interp1d

z = np.asarray([200, 300, 400, 500, 600])
out = np.zeros([ni, len(z)], dtype=np.float32)
for i in range(ni):
    f = interp1d(x[i, :], y[i, :], kind='linear')
    out[i, :] = f(z)
This will save you some of the memory copying that occurs in your implementation, in the calls to out.append(...) and the final np.asarray(out).

Does the LCG fail the Kolmogorov-Smirnov test as badly as my code suggests?

I use the following Python code to illustrate the generation of random variables to students:
import numpy as np
import scipy.stats as stats
def lcg(n, x0, M=2**32, a=1103515245, c=12345):
    result = np.zeros(n)
    for i in range(n):
        result[i] = (a*x0 + c) % M
        x0 = result[i]
    return np.array([x/M for x in result])
x = lcg(10**6, 3)
print(stats.kstest(x, 'uniform'))
The default parameters are the ones used by glibc, according to Wikipedia. The last line of the code prints
KstestResult(statistic=0.043427751892089805, pvalue=0.0)
The pvalue of 0.0 indicates that the observation would basically never occur if the elements of x were truly distributed according to a uniform distribution.
My question is: is there a bug in my code, or does the LCG with the parameters given not pass the Kolmogorov-Smirnov test with 10**6 replicas?
There is a problem with your code: the distribution it produces is not quite uniform.
I've changed your LCG implementation a bit, and all is good now (Python 3.7, Anaconda, Win10 x64)
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

def lcg(n, x0, M=2**32, a=1103515245, c=12345):
    result = np.zeros(n)
    for i in range(n):
        x0 = (a*x0 + c) % M
        result[i] = x0
    return np.array([x/float(M) for x in result])

#x = np.random.uniform(0.0, 1.0, 1000000)
x = lcg(1000000, 3)
print(stats.kstest(x, 'uniform'))

count, bins, ignored = plt.hist(x, 15, density=True)
plt.plot(bins, np.ones_like(bins), linewidth=2, color='r')
plt.show()
which prints
KstestResult(statistic=0.0007238884545415214, pvalue=0.6711878724246786)
and plots a histogram that sits close to the flat uniform density.
UPDATE
As @pjs pointed out, you'd better divide by float(M) right in the loop, so there is no need for a second pass over the whole array:
def lcg(n, x0, M=2**32, a=1103515245, c=12345):
    result = np.empty(n)
    for i in range(n):
        x0 = (a*x0 + c) % M
        result[i] = x0 / float(M)
    return result
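A quick usage check of this updated generator (my addition, not part of the original answer); since the LCG is deterministic for a fixed seed, it reproduces the healthy p-value shown above:

import scipy.stats as stats

x = lcg(10**6, 3)
print(stats.kstest(x, 'uniform'))  # p-value well above any usual significance level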
To complement Severin's answer, the reason my code was not working properly is that result was an array of floating point numbers.
We can see the difference between the two implementations already at the second iteration.
After the first iteration, x0 = 3310558080.
In [9]: x0 = 3310558080
In [10]: float_x0 = float(x0)
In [11]: (a*x0 + c) % M
Out[11]: 465823161
In [12]: (a*float_x0 + c) % M
Out[12]: 465823232.0
In [13]: a*x0
Out[13]: 3653251310737929600
In [14]: a*float_x0
Out[14]: 3.6532513107379297e+18
So the problem had to do with the use of floating point numbers: a*x0 here is about 3.7e18, far beyond 2**53, the limit up to which float64 represents integers exactly, so the product gets rounded before the modulo is taken and the residue comes out wrong.
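A minimal sketch (my addition) that makes the precision limit explicit:

a, x0 = 1103515245, 3310558080
print(2 ** 53)                       # 9007199254740992; integers up to here are exact in float64
print(a * x0)                        # 3653251310737929600, exact as a Python int
print(a * x0 == int(float(a * x0)))  # False: converting the product to float rounds it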

How to write a function that generates a vector recursively in Python?

How can I write a recursive function to generate a vector X of size (1,n) as follows, where X_i is the i-th entry:
X_1 = Z_1 * E_1
X_i = max{B_(1,i) * X_1, ... , B_((i-1),i) * X_(i-1), Z_i} * E_i, i = 2,...,n,
where
Z = np.random.normal(0, 1,size = n)
E = np.random.lognormal(0, 1, size = n)
B = np.random.uniform(0,1,(n,n))
I do not have any experience with recursive functions, which is why I cannot present any code with which I tried to solve this.
If you're working with numpy, then use all the power of numpy, not just the random module ;)
And if you work with vectors, then forget about recursion and use numpy's vectorised operations. For example, np.max gives you the maximum over an axis, and plain * (np.multiply) gives you element-wise multiplication. You also have np.prod for the product of array elements over a given axis... Those are just examples that might fit your problem well. For the full documentation, see https://docs.scipy.org/doc/numpy/
I got it; one does not need recursion, as @meowgoesthedog stated in the first comment.
import numpy as np

s = 1000  # sample size
n = 5

Z = np.random.normal(0, 1, size=(s, n))
B = np.random.uniform(0, 1, (n, n))
E = np.random.lognormal(0, 1, size=(s, n))

X = np.zeros((s, n))
X[:, 0] = Z[:, 0] * E[:, 0]
for k in range(s):
    for l in range(1, n):
        X[k, l] = max(np.max(X[k, :l] * B[:l, l]), Z[k, l]) * E[k, l]
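The recurrence over l is inherently sequential, but the loop over the samples k can be vectorised, and that is usually the expensive dimension here. A rough sketch (my addition), reusing s, n, Z, B and E from the block above:

X = np.zeros((s, n))
X[:, 0] = Z[:, 0] * E[:, 0]
for l in range(1, n):
    # per-sample maximum of the already computed columns (scaled by B) against Z[:, l]
    X[:, l] = np.maximum(np.max(X[:, :l] * B[:l, l], axis=1), Z[:, l]) * E[:, l]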

Python: Scipy's curve_fit for NxM arrays?

Usually I use Scipy.optimize.curve_fit to fit custom functions to data.
Data in this case was always a 1 dimensional array.
Is there a similar function for a two-dimensional array?
So, for example, I have a 10x10 numpy array. Then I have a function that does some stuff and creates a 10x10 numpy array, and I want to fit the function, so that the resulting 10x10 array has the best fit to the input array.
Maybe an example is better :)
data = pyfits.getdata('data.fits')  # fits is an image format; this gives me an NxM numpy array
mod1 = pyfits.getdata('mod1.fits')
mod2 = pyfits.getdata('mod2.fits')
mod3 = pyfits.getdata('mod3.fits')

mod1_1D = numpy.ravel(mod1)
mod2_1D = numpy.ravel(mod2)
mod3_1D = numpy.ravel(mod3)

def dostuff(a, b):  # originally this is a function for 2D arrays
    newdata = (mod1_1D*12) + (mod2_1D)**a - mod3_1D/b
    return newdata
Now a and b should be fitted, so that newdata is as close as possible to data.
What I got so far:
data1D = numpy.ravel(data)
data_X = numpy.arange(data1D.size)
fit = curve_fit(dostuff,data_X,data1D)
But print fit only gives me
(array([ 1.]), inf)
I do have some NaNs in the arrays, maybe that's a problem?
The goal is to express the 2D function as a 1D function: g(x, y, ...) --> f(xy, ...)
Converting the coordinate pair (x, y) into a single number xy may seem tricky at first. But it's actually quite simple. Just enumerate all data points and you have a single number that uniquely defines each coordinate pair. The fitted function simply has to reconstruct the original coordinates, do its calculations and return the result.
Example that fits a 2D linear gradient in a 20x10 image:
import scipy as sp
import scipy.optimize  # makes sp.optimize available
import numpy as np
import matplotlib.pyplot as plt

n, m = 10, 20

# noisy example data
x = np.arange(m).reshape(1, m)
y = np.arange(n).reshape(n, 1)
z = x + y * 2 + np.random.randn(n, m) * 3

def f(xy, a, b):
    i = xy // m  # reconstruct y coordinates
    j = xy % m   # reconstruct x coordinates
    out = i * a + j * b
    return out

xy = np.arange(z.size)  # 0 is the top left pixel and 199 is the bottom right pixel
res = sp.optimize.curve_fit(f, xy, np.ravel(z))
z_est = f(xy, *res[0])
z_est2d = z_est.reshape(n, m)

plt.subplot(2, 1, 1)
plt.plot(np.ravel(z), label='original')
plt.plot(z_est, label='fitted')
plt.legend()
plt.subplot(2, 2, 3)
plt.imshow(z)
plt.xlabel('original')
plt.subplot(2, 2, 4)
plt.imshow(z_est2d)
plt.xlabel('fitted')
plt.show()
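A variant not shown in the original answer: curve_fit hands xdata to the model function unchanged, so instead of encoding the pixel index you can pass the raveled coordinate arrays themselves. A sketch on the same kind of synthetic data:

import numpy as np
from scipy.optimize import curve_fit

n, m = 10, 20
yy, xx = np.mgrid[0:n, 0:m]                   # integer pixel coordinates
z = xx + yy * 2 + np.random.randn(n, m) * 3   # noisy gradient, as above

def g(coords, a, b):
    i, j = coords                             # y and x coordinates, already raveled
    return i * a + j * b

popt, pcov = curve_fit(g, (yy.ravel(), xx.ravel()), z.ravel())
print(popt)                                   # should come out close to [2, 1]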
I would recommend using symfit for this; I wrote it to take care of all of this magic for you automatically.
In symfit you would just write the equation pretty much as you would on paper, and then you can run the fit.
I would do something like this:
from symfit import parameters, variables, Fit
# Assuming all this data is in the form of NxM arrays
data = pyfits.getdata('data.fits')
mod1 = pyfits.getdata('mod1.fits')
mod2 = pyfits.getdata('mod2.fits')
mod3 = pyfits.getdata('mod3.fits')
a, b = parameters('a, b')
x, y, z, u = variables('x, y, z, u')
model = {u: (x * 12) + y**a - z / b}
fit = Fit(model, x=mod1, y=mod2, z=mod3, u=data)
fit_result = fit.execute()
print(fit_result)
Unfortunately I have not yet included examples of the kind you need in the docs, but if you just look at the docs I think you can figure it out in case this doesn't work out of the box.

Any faster way to get the same results?

I have two given arrays: x and y. I want to calculate correlation coefficient between two arrays as follows:
import numpy as np
from scipy.stats import pearsonr
x = np.array([[[ 1,  2,  3,  4],
               [ 5,  6,  7,  8]],
              [[11, 22, 23, 24],
               [25, 26, 27, 28]]])
i, j, k = x.shape

y = np.array([[[31, 32, 33, 34],
               [35, 36, 37, 38]],
              [[41, 42, 43, 44],
               [45, 46, 47, 48]]])

xx = np.row_stack(np.dstack(x))
yy = np.row_stack(np.dstack(y))

results = []
for a, b in zip(xx, yy):
    r_sq, p_val = pearsonr(a, b)
    results.append(r_sq)
results = np.array(results).reshape(j, k)
print results
[[ 1. 1. 1. 1.]
[ 1. 1. 1. 1.]]
The answer is correct. However, I would like to know if there are better and faster ways of doing it using numpy and/or scipy.
An alternate way (not necessarily better) is:
xx = x.reshape(2,-1).T # faster, minor issue though
yy = y.reshape(2,-1).T
results = [pearsonr(a,b)[0] for a,b in zip(xx,yy)]
results = np.array(results).reshape(x.shape[1:])
Another current thread was discussing the use of list comprehensions to iterate over values of an array(s): Confusion about numpy's apply along axis and list comprehensions
As discussed there, an alternative is to initialize results, and fill in values during the iteration. That's probably faster for really large cases, but for modest ones, this
np.array([... for .. in ...])
is reasonable.
The deeper question is whether pearsonr, or some alternative, can calculate this correlation for many pairs, rather than just one pair. That may require studying the internals of pearsonr, or other functions in stats.
Here's a first cut at vectorizing stats.pearsonr:
from scipy import stats  # stats.ss is available in the older scipy used here

def pearsonr2(a, b):
    # stats.pearsonr adapted so that a and b are (N, n) arrays of paired rows
    n = a.shape[1]  # only needed if you also want the p-value
    ma = a.mean(1)
    mb = b.mean(1)
    am, bm = a - ma[:, None], b - mb[:, None]
    r_num = np.add.reduce(am * bm, 1)
    r_den = np.sqrt(stats.ss(am, 1) * stats.ss(bm, 1))
    r = r_num / r_den
    r = np.clip(r, -1.0, 1.0)
    return r

print pearsonr2(xx, yy)
It matches your case, though these test values don't really exercise the function. I just took the pearsonr code, added the axis=1 parameter in most of the lines, and made sure everything ran. The prob step could be included with some boolean masking.
(I can add the stats.pearsonr code to my answer if needed).
This version will take a and b of any dimension (as long as they match), and do your pearsonr calculation along the designated axis. No reshaping needed.
def pearsonr_flex(a, b, axis=1):
    # stats.pearsonr adapted to work on matching arrays a and b
    # along an arbitrary axis
    n = a.shape[axis]  # only needed if you also want the p-value
    ma = a.mean(axis, keepdims=True)
    mb = b.mean(axis, keepdims=True)
    am, bm = a - ma, b - mb
    r_num = np.add.reduce(am * bm, axis)
    r_den = np.sqrt(stats.ss(am, axis) * stats.ss(bm, axis))
    r = r_num / r_den
    r = np.clip(r, -1.0, 1.0)
    return r

pearsonr_flex(xx, yy, 1)
pearsonr_flex(x, y, 0)
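As a side note (my addition): stats.ss has since been removed from scipy, so on current versions the same idea is easiest to write with plain numpy sums:

import numpy as np

def pearsonr_axis(a, b, axis=1):
    # Pearson correlation of matching arrays a and b along `axis`, numpy only
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    am = a - a.mean(axis, keepdims=True)
    bm = b - b.mean(axis, keepdims=True)
    r = (am * bm).sum(axis) / np.sqrt((am * am).sum(axis) * (bm * bm).sum(axis))
    return np.clip(r, -1.0, 1.0)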

Correctly annotate a numba function using jit

I started with this code to calculate a simple matrix multiplication. It runs with %timeit in around 7.85s on my machine.
To try to speed this up I tried Cython, which reduced the time to 0.4s. I also want to try the numba jit compiler to see if I can get similar speed-ups (with less effort). But adding the @jit annotation appears to give exactly the same timings (~7.8s). I know it can't figure out the types of the calculate_z_numpy() call, but I'm not sure what I can do to coerce it. Any ideas?
from numba import jit
import numpy as np

@jit('f8(c8[:],c8[:],uint)')
def calculate_z_numpy(q, z, maxiter):
    """use vector operations to update all zs and qs to create new output array"""
    output = np.resize(np.array(0, dtype=np.int32), q.shape)
    for iteration in range(maxiter):
        z = z*z + q
        done = np.greater(abs(z), 2.0)
        q = np.where(done, 0+0j, q)
        z = np.where(done, 0+0j, z)
        output = np.where(done, iteration, output)
    return output
def calc_test():
    w = h = 1000
    maxiter = 1000
    # make a list of x and y values which will represent q
    # xx and yy are the co-ordinates, for the default configuration they'll look like:
    # if we have a 1000x1000 plot
    # xx = [-2.13, -2.1242, -2.1184000000000003, ..., 0.7526000000000064, 0.7584000000000064, 0.7642000000000064]
    # yy = [1.3, 1.2948, 1.2895999999999999, ..., -1.2844000000000058, -1.2896000000000059, -1.294800000000006]
    x1, x2, y1, y2 = -2.13, 0.77, -1.3, 1.3
    x_step = (float(x2 - x1) / float(w)) * 2
    y_step = (float(y1 - y2) / float(h)) * 2
    y = np.arange(y2, y1-y_step, y_step, dtype=np.complex)
    x = np.arange(x1, x2, x_step)
    q1 = np.empty(y.shape[0], dtype=np.complex)
    q1.real = x
    q1.imag = y
    # Transpose y
    x_y_square_matrix = x + y[:, np.newaxis]  # it is np.complex128
    # convert square matrix to a flatted vector using ravel
    q2 = np.ravel(x_y_square_matrix)
    # create z as a 0+0j array of the same length as q
    # note that it defaults to reals (float64) unless told otherwise
    z = np.zeros(q2.shape, np.complex128)
    output = calculate_z_numpy(q2, z, maxiter)
    print(output)

calc_test()
I figured out how to do this with some help from someone else.
@jit('i4[:](c16[:],c16[:],i4,i4[:])', nopython=True)
def calculate_z_numpy(q, z, maxiter, output):
    """use explicit loops to update all zs and qs and fill the output array"""
    for iteration in range(maxiter):
        for i in range(len(z)):
            z[i] = z[i]*z[i] + q[i]
            if abs(z[i]) > 2:
                output[i] = iteration
                z[i] = 0+0j
                q[i] = 0+0j
    return output
What I learnt is to use numpy data structures as inputs (for the typing), but within the function to use C-like paradigms for looping.
This runs in 402 ms, which is a touch faster than the Cython code (0.45 s), so for fairly minimal work in rewriting the loop explicitly we have a Python version that is (just) faster than the Cython one.
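As a side note, not part of the original answer: with current numba the explicit signature string is usually unnecessary, because @njit infers the types from the first call. A rough sketch of the same explicit loop:

import numpy as np
from numba import njit

@njit
def calculate_z(q, z, maxiter, output):
    # same explicit loop as above; types are inferred when the function is first called
    for iteration in range(maxiter):
        for i in range(len(z)):
            z[i] = z[i] * z[i] + q[i]
            if abs(z[i]) > 2:
                output[i] = iteration
                z[i] = 0 + 0j
                q[i] = 0 + 0j
    return output

# example call (assuming q0 and z0 are complex128 arrays of equal length):
# out = calculate_z(q0, z0, 1000, np.zeros(len(q0), dtype=np.int32))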
