Any faster way to get the same results? - python

I have two given arrays, x and y. I want to calculate the correlation coefficient between the two arrays as follows:
import numpy as np
from scipy.stats import pearsonr
x = np.array([[[ 1,  2,  3,  4],
               [ 5,  6,  7,  8]],
              [[11, 22, 23, 24],
               [25, 26, 27, 28]]])
i, j, k = x.shape
y = np.array([[[31, 32, 33, 34],
               [35, 36, 37, 38]],
              [[41, 42, 43, 44],
               [45, 46, 47, 48]]])
xx = np.row_stack(np.dstack(x))
yy = np.row_stack(np.dstack(y))
results = []
for a, b in zip(xx, yy):
    r_sq, p_val = pearsonr(a, b)
    results.append(r_sq)
results = np.array(results).reshape(j, k)
print(results)
[[ 1.  1.  1.  1.]
 [ 1.  1.  1.  1.]]
The answer is correct. However, I would like to know whether there are better and faster ways of doing this using numpy and/or scipy.

An alternate way (not necessarily better) is:
xx = x.reshape(2,-1).T # faster, minor issue though
yy = y.reshape(2,-1).T
results = [pearsonr(a,b)[0] for a,b in zip(xx,yy)]
results = np.array(results).reshape(x.shape[1:])
Another recent thread discussed using list comprehensions to iterate over the values of one or more arrays: Confusion about numpy's apply along axis and list comprehensions
As discussed there, an alternative is to initialize results, and fill in values during the iteration. That's probably faster for really large cases, but for modest ones, this
np.array([... for .. in ...])
is reasonable.
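For what it's worth, a minimal sketch of that preallocate-and-fill variant for the arrays above (same xx, yy and pearsonr as in the question):

# preallocate the flat result array, then fill it in place during the loop
results = np.empty(len(xx))
for idx, (a, b) in enumerate(zip(xx, yy)):
    results[idx] = pearsonr(a, b)[0]
results = results.reshape(x.shape[1:])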
The deeper question is whether pearsonr, or some alternative, can calculate this correlation for many pairs, rather than just one pair. That may require studying the internals of pearsonr, or other functions in stats.
Here's a first cut at vectorizing stats.pearsonr:
def pearsonr2(a, b):
    # stats.pearsonr adapted to work row-wise:
    # a and b are 2-D arrays (here (8, 2)); one r per row
    n = a.shape[1]   # would be needed for the p-value step, unused here
    ma = a.mean(1)
    mb = b.mean(1)
    am, bm = a - ma[:, None], b - mb[:, None]
    r_num = np.add.reduce(am * bm, 1)
    r_den = np.sqrt(np.sum(am * am, 1) * np.sum(bm * bm, 1))  # row-wise sums of squares (stats.ss is gone from newer scipy)
    r = r_num / r_den
    r = np.clip(r, -1.0, 1.0)
    return r

print(pearsonr2(xx, yy))
It matches your case, though these test values don't really exercise the function. I just took the pearsonr code, added the axis=1 parameter in most of the lines, and made sure everything ran. The prob step could be included with some boolean masking.
(I can add the stats.pearsonr code to my answer if needed).
This version will take a and b of any (matching) shape and do your pearsonr calculation along the designated axis. No reshaping needed.
def pearsonr_flex(a, b, axis=1):
    # stats.pearsonr adapted to work on matching-shape
    # arrays a and b along an arbitrary axis
    n = a.shape[axis]
    ma = a.mean(axis, keepdims=True)
    mb = b.mean(axis, keepdims=True)
    am, bm = a - ma, b - mb
    r_num = np.add.reduce(am * bm, axis)
    r_den = np.sqrt(np.sum(am * am, axis) * np.sum(bm * bm, axis))  # sums of squares along the chosen axis
    r = r_num / r_den
    r = np.clip(r, -1.0, 1.0)
    return r

pearsonr_flex(xx, yy, 1)
pearsonr_flex(x, y, 0)

Related

"Double vectorize" Numpy functions across two different arrays

Is there some efficient way to "double vectorize" a Numpy function?
Consider some function f which is vectorized over its first 3 positional arguments; its implementation consists entirely of Numpy vectorized functions (arithmetic, trigonometry, et alia) which correctly implement broadcasting.
The first two arguments of f are x and y, which represent some kind of input data. Its 3rd argument q is a parameter that controls some aspect of the computation.
In my program, I have the following:
Arrays x and y that are 1-d arrays of the same length. x[i] and y[i] correspond to the ith data point in a dataset.
Array q which is a 1-d array of different length. q[k] corresponds to some kth data point in a different collection.
I want to compute the value of f(x[i], y[i], q[k]) for any pair i, k, collecting the results in a matrix.
That is, I want to perform a vectorized version of the following calculation:
result = np.empty((len(x), len(q)))
for k in range(len(q)):
    for i in range(len(x)):
        result[i, k] = f(x[i], y[i], q[k])
The "singly-vectorized" version (over the i index) would be:
result = np.empty((len(x), len(q)))
for k in range(len(q)):
    result[:, k] = f(x, y, q[k])
And this is what I currently use in my code.
Is there an efficient way to vectorize over both indexes, maybe using some broadcasting trick?
As an example of such a function f, consider the Law of Cosines:
def law_of_cosines(a, b, ϑ):
    # c² = a² + b² - 2ab·cos(ϑ)
    return np.sqrt(
        np.square(a) +
        np.square(b) -
        2.0 * a * b * np.cos(ϑ)
    )
Depending on the actual f, I think I'd approach it like...
# set up example variables
N, M = 11, 13
x = np.random.normal(size=N)
y = np.random.normal(size=N)
q = np.random.normal(size=M)
# reshape for broadcasting: data along axis 0, parameters along axis 1
X = x[:, np.newaxis]   # shape (N, 1)
Y = y[:, np.newaxis]   # shape (N, 1)
Q = q[np.newaxis, :]   # shape (1, M)
f(X, Y, Q)             # result has shape (N, M)
Then, if f is ufunc-like (I lack the proper term for it), it should broadcast nicely. If it isn't, and it's hard to tell without the actual "a bit complicated" implementation, you could make it so, either by changing the implementation or possibly with numba's vectorize decorator, which is how I usually do that.
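To make that last suggestion concrete, here is a rough sketch (not the asker's real f, and assuming numba is installed) using numba's @vectorize decorator, which compiles a scalar function into a broadcasting ufunc:

import numpy as np
from numba import vectorize

@vectorize(['float64(float64, float64, float64)'])
def law_of_cosines(a, b, theta):
    # scalar body; @vectorize turns it into a ufunc that broadcasts
    return np.sqrt(a * a + b * b - 2.0 * a * b * np.cos(theta))

N, M = 11, 13
x = np.random.normal(size=N)
y = np.random.normal(size=N)
q = np.random.normal(size=M)

# ufuncs broadcast, so the (N, 1) against (1, M) trick from above works directly
out = law_of_cosines(x[:, np.newaxis], y[:, np.newaxis], q[np.newaxis, :])
print(out.shape)  # (11, 13)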
You can vectorise over the second index as well using np.outer, provided f factorises into a term depending on (x, y) times a term depending on q, as in the np.arctan2 example below. It contains no for loops.
The following code contains a print statement with the "double-vectorised" version. I also fixed the minor typo in brackets in the creation of the empty results matrix.
import numpy as np
x = [0.1, 0.2, 0.3]
y = [0.4, 0.6, 0.8]
q = [0.4, 0.5, 0.6, 0.7]
result = np.empty((len(x), len(q)))
for k in range(len(q)):
    for i in range(len(x)):
        result[i, k] = np.arctan2(x[i], y[i]) * q[k]
print(result)
print(np.outer(np.arctan2(x, y), q))
Results:
[[0.09799147 0.12248933 0.1469872 0.17148506]
[0.12870022 0.16087528 0.19305033 0.22522539]
[0.14350827 0.17938534 0.2152624 0.25113947]]
[[0.09799147 0.12248933 0.1469872 0.17148506]
[0.12870022 0.16087528 0.19305033 0.22522539]
[0.14350827 0.17938534 0.2152624 0.25113947]]
Hope this helps.

How to write a function, that generates a vector recursively in Python?

How can I write a recursive function to generate a vector X of size (1,n) as follows, where X_i is the i-th entry:
X_1 = Z_1 * E_1
X_i = max{B_(1,i) * X_1, ... , B_((i-1),i) * X_(i-1), Z_i} * E_i, i = 2,...,n,
where
Z = np.random.normal(0, 1,size = n)
E = np.random.lognormal(0, 1, size = n)
B = np.random.uniform(0,1,(n,n))
I do not have any experience with recursive functions, which is why I cannot present any code I have tried.
If you're working with numpy, then use all the power of numpy, not just the random module ;)
And if you work with vectors, then forget about recursion and use numpy's vectorised operations. For example, np.max gives you the maximum over an axis, the * operator (np.multiply) gives you element-wise multiplication, and np.prod gives you the product of array elements over a given axis... Those are just examples that might fit your problem well. For the full documentation, see https://docs.scipy.org/doc/numpy/
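As a quick, purely illustrative example of those operations (arbitrary arrays, not the B, Z, E from the question):

import numpy as np

A = np.arange(12).reshape(3, 4)
C = np.full((3, 4), 2.0)

print(np.max(A, axis=1))        # maximum along the last axis, one value per row
print(A * C)                    # element-wise multiplication (np.multiply)
print(np.prod(A + 1, axis=0))   # product of elements along the first axis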
I got it; one does not need recursion, as @meowgoesthedog stated in the first comment.
import numpy as np
s = 1000  # sample size
n = 5
Z = np.random.normal(0, 1, size=(s, n))
B = np.random.uniform(0, 1, (n, n))
E = np.random.lognormal(0, 1, size=(s, n))
X = np.zeros((s, n))
X[:, 0] = Z[:, 0] * E[:, 0]
for k in range(s):
    for l in range(1, n):
        X[k, l] = max(np.max(X[k, :l] * B[:l, l]), Z[k, l]) * E[k, l]
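A further speed-up is possible here: only the recursion over the column index l has to stay sequential, while the loop over the s samples can be replaced by vectorised row operations. A sketch along those lines (same Z, B, E as above):

# vectorise over the sample axis; only the short recursion over l remains a loop
X = np.zeros((s, n))
X[:, 0] = Z[:, 0] * E[:, 0]
for l in range(1, n):
    X[:, l] = np.maximum(np.max(X[:, :l] * B[:l, l], axis=1), Z[:, l]) * E[:, l]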

Correctly annotate a numba function using jit

I started with this code, which performs a simple Mandelbrot-style escape-time calculation over a grid of complex numbers. It runs with %timeit in around 7.85 s on my machine.
To try to speed this up I tried Cython, which reduced the time to 0.4 s. I also want to try the numba jit compiler to see if I can get similar speed-ups (with less effort). But adding the @jit annotation appears to give exactly the same timings (~7.8 s). I know it can't figure out the types of the calculate_z_numpy() call, but I'm not sure what I can do to coerce it. Any ideas?
from numba import jit
import numpy as np

@jit('f8(c8[:],c8[:],uint)')
def calculate_z_numpy(q, z, maxiter):
    """use vector operations to update all zs and qs to create new output array"""
    output = np.resize(np.array(0, dtype=np.int32), q.shape)
    for iteration in range(maxiter):
        z = z*z + q
        done = np.greater(abs(z), 2.0)
        q = np.where(done, 0+0j, q)
        z = np.where(done, 0+0j, z)
        output = np.where(done, iteration, output)
    return output

def calc_test():
    w = h = 1000
    maxiter = 1000
    # make a list of x and y values which will represent q
    # xx and yy are the co-ordinates, for the default configuration they'll look like:
    # if we have a 1000x1000 plot
    # xx = [-2.13, -2.1242, -2.1184000000000003, ..., 0.7526000000000064, 0.7584000000000064, 0.7642000000000064]
    # yy = [1.3, 1.2948, 1.2895999999999999, ..., -1.2844000000000058, -1.2896000000000059, -1.294800000000006]
    x1, x2, y1, y2 = -2.13, 0.77, -1.3, 1.3
    x_step = (float(x2 - x1) / float(w)) * 2
    y_step = (float(y1 - y2) / float(h)) * 2
    y = np.arange(y2, y1 - y_step, y_step, dtype=np.complex128)
    x = np.arange(x1, x2, x_step)
    q1 = np.empty(y.shape[0], dtype=np.complex128)
    q1.real = x
    q1.imag = y
    # Transpose y
    x_y_square_matrix = x + y[:, np.newaxis]  # it is np.complex128
    # convert the square matrix to a flattened vector using ravel
    q2 = np.ravel(x_y_square_matrix)
    # create z as a 0+0j array of the same length as q
    # note that it defaults to reals (float64) unless told otherwise
    z = np.zeros(q2.shape, np.complex128)
    output = calculate_z_numpy(q2, z, maxiter)
    print(output)

calc_test()
I figured out how to do this with some help from someone else.
@jit('i4[:](c16[:],c16[:],i4,i4[:])', nopython=True)
def calculate_z_numpy(q, z, maxiter, output):
    """use vector operations to update all zs and qs to create new output array"""
    for iteration in range(maxiter):
        for i in range(len(z)):
            z[i] = z[i]*z[i] + q[i]
            if abs(z[i]) > 2:
                output[i] = iteration
                z[i] = 0+0j
                q[i] = 0+0j
    return output
What I learnt is to use numpy data structures as inputs (for the typing), but inside the function to use C-like explicit loops.
This runs in 402 ms, which is a touch faster than the Cython code (0.45 s), so for fairly minimal work in rewriting the loop explicitly we have a Python version that is (just) faster than the Cython one.
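For completeness, a hypothetical call site for the reworked function, assuming q2, z and maxiter as set up in calc_test() above; the output array now has to be allocated by the caller to match the i4[:] argument:

# preallocate the int32 output to match the 'i4[:]' signature, then call
output = np.zeros(q2.shape, dtype=np.int32)
output = calculate_z_numpy(q2, z, maxiter, output)
print(output)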

numpy.polyfit with adapted parameters

Regarding this: polynomial equation parameters
There I get 3 parameters for a quadratic function y = a*x² + b*x + c. Now I want to get only the first parameter of a quadratic that describes my function y = a*x². In other words: I want to set b = c = 0 and get the fitted parameter a. If I understand it right, polyfit isn't able to do this.
This can be done by numpy.linalg.lstsq. To explain how to use it, it is maybe easiest to show how you would do a standard 2nd order polyfit 'by hand'. Assuming you have your measurement vectors x and y, you first construct a so-called design matrix M like so:
M = np.column_stack((x**2, x, np.ones_like(x)))
after which you can obtain the usual coefficients as the least-square solution to the equation M * k = y using lstsq like this:
k, _, _, _ = np.linalg.lstsq(M, y)
where k is the column vector [a, b, c] with the usual coefficients. Note that lstsq returns some other parameters, which you can ignore. This is a very powerful trick, which allows you to fit y to any linear combination of the columns you put into your design matrix. It can be used e.g. for 2D fits of the type z = a * x + b * y (see e.g. this example, where I used the same trick in Matlab), or polyfits with missing coefficients like in your problem.
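As an aside, a small self-contained sketch (with made-up data, not the linked Matlab example) of the z = a * x + b * y fit mentioned above, using the same design-matrix trick:

import numpy as np

# made-up data following z = 2*x - 3*y plus a little noise
rng = np.random.default_rng(0)
x = rng.random(100)
y = rng.random(100)
z = 2.0 * x - 3.0 * y + 0.01 * rng.standard_normal(100)

M = np.column_stack((x, y))                      # one column per unknown coefficient
(a, b), *_ = np.linalg.lstsq(M, z, rcond=None)   # least-square solution of M * [a, b] = z
print(a, b)                                      # close to 2.0 and -3.0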
In your case, the design matrix is simply a single column containing x**2. Quick example:
import numpy as np
import matplotlib.pylab as plt
# generate some noisy data
x = np.arange(1000)
y = 0.0001234 * x**2 + 3*np.random.randn(len(x))
# do fit
M = np.column_stack((x**2,)) # construct design matrix
k, _, _, _ = np.linalg.lstsq(M, y) # least-square fit of M * k = y
# quick plot
plt.plot(x, y, '.', x, k*x**2, 'r', linewidth=3)
plt.legend(('measurement', 'fit'), loc=2)
plt.title('best fit: y = {:.8f} * x**2'.format(k[0]))
plt.show()
Result: (plot omitted: the noisy measurements with the fitted y = a*x² curve overlaid)
The coefficients are computed to minimize the squared error; you don't assign them. However, you can set coefficients to zero afterwards if they are negligibly small. E.g., I have a list of points on the curve y = 33*x²:
In [51]: x=np.arange(20)
In [52]: y=33*x**2 #y = 33*x²
In [53]: coeffs=np.polyfit(x, y, 2)
In [54]: coeffs
Out[54]: array([ 3.30000000e+01, 8.99625199e-14, -7.62430619e-13])
In [55]: epsilon=np.finfo(np.float32).eps
In [56]: coeffs[np.abs(coeffs)<epsilon]=0
In [57]: coeffs
Out[57]: array([ 33., 0., 0.])

Scipy Fast 1-D interpolation without any loop

I have two 2D arrays, x(ni, nj) and y(ni, nj), that I need to interpolate over one axis. I want to interpolate along the last axis for every ni.
I wrote
import numpy as np
from scipy.interpolate import interp1d

z = np.asarray([200, 300, 400, 500, 600])
out = []
for i in range(ni):
    f = interp1d(x[i, :], y[i, :], kind='linear')
    out.append(f(z))
out = np.asarray(out)
However, I think this method is inefficient and slow, because of the loop, when the arrays get large. What is the fastest way to interpolate a multi-dimensional array like this? Is there any way to perform linear and cubic interpolation without a loop? Thanks.
The method you propose does have a python loop, so for large values of ni it is going to get slow. That said, unless you are going to have large ni you shouldn't worry much.
I have created sample input data with the following code:
def sample_data(n_i, n_j, z_shape):
    x = np.random.rand(n_i, n_j) * 1000
    x.sort()
    x[:, 0] = 0
    x[:, -1] = 1000
    y = np.random.rand(n_i, n_j)
    z = np.random.rand(*z_shape) * 1000
    return x, y, z
I have tested them with these two versions of linear interpolation:
def interp_1(x, y, z):
    rows, cols = x.shape
    out = np.empty((rows,) + z.shape, dtype=y.dtype)
    for j in range(rows):
        out[j] = interp1d(x[j], y[j], kind='linear', copy=False)(z)
    return out

def interp_2(x, y, z):
    rows, cols = x.shape
    row_idx = np.arange(rows).reshape((rows,) + (1,) * z.ndim)
    col_idx = np.argmax(x.reshape(x.shape + (1,) * z.ndim) > z, axis=1) - 1
    ret = y[row_idx, col_idx + 1] - y[row_idx, col_idx]
    ret /= x[row_idx, col_idx + 1] - x[row_idx, col_idx]
    ret *= z - x[row_idx, col_idx]
    ret += y[row_idx, col_idx]
    return ret
interp_1 is an optimized version of your code, following Dave's answer. interp_2 is a vectorized implementation of linear interpolation that avoids any python loop whatsoever. Coding something like this requires a sound understanding of broadcasting and indexing in numpy, and some things are going to be less optimized than what interp1d does. A prime example is finding the bin in which to interpolate a value: interp1d will surely break out of loops early once it finds the bin, whereas the above function compares the value against all bins.
So the result is going to be very dependent on what n_i and n_j are, and even how long your array z of values to interpolate is. If n_j is small and n_i is large, you should expect an advantage from interp_2, and from interp_1 if it is the other way around. Smaller z should be an advantage to interp_2, longer ones to interp_1.
I have actually timed both approaches with a variety of n_i and n_j, for z of shape (5,) and (50,). (The resulting contour plots of the timing differences are not reproduced here; the code that generates them is at the end of this answer.)
So it seems that for z of shape (5,) you should go with interp_2 whenever n_j < 1000, and with interp_1 elsewhere. Not surprisingly, the threshold is different for z of shape (50,), now being around n_j < 100. It seems tempting to conclude that you should stick with your code if n_j * len(z) > 5000, but change it to something like interp_2 above if not, but there is a great deal of extrapolating in that statement! If you want to further experiment yourself, here's the code I used to produce the graphs.
import timeit                    # needed for the timing loop below
import matplotlib.pyplot as plt  # needed for the contour plot below

n_s = np.logspace(1, 3.3, 25)
int_1 = np.empty((len(n_s),) * 2)
int_2 = np.empty((len(n_s),) * 2)
z_shape = (5,)
for i, n_i in enumerate(n_s):
    print(int(n_i))
    for j, n_j in enumerate(n_s):
        x, y, z = sample_data(int(n_i), int(n_j), z_shape)
        int_1[i, j] = min(timeit.repeat('interp_1(x, y, z)',
                                        'from __main__ import interp_1, x, y, z',
                                        repeat=10, number=1))
        int_2[i, j] = min(timeit.repeat('interp_2(x, y, z)',
                                        'from __main__ import interp_2, x, y, z',
                                        repeat=10, number=1))
cs = plt.contour(n_s, n_s, np.transpose(int_1 - int_2))
plt.clabel(cs, inline=1, fontsize=10)
plt.xlabel('n_i')
plt.ylabel('n_j')
plt.title('timeit(interp_2) - timeit(interp_1), z.shape=' + str(z_shape))
plt.show()
One optimization is to allocate the result array once like so:
import numpy as np
from scipy.interpolate import interp1d

z = np.asarray([200, 300, 400, 500, 600])
out = np.zeros([ni, len(z)], dtype=np.float32)
for i in range(ni):
    f = interp1d(x[i, :], y[i, :], kind='linear')
    out[i, :] = f(z)
This will save you some of the memory copying that your implementation incurs in the calls to out.append(...).
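Another possibility worth mentioning (a sketch, not from the answers above): for purely linear interpolation, np.interp avoids constructing an interp1d object on every row, which is often a noticeable win. This assumes each x[i] is sorted in increasing order:

import numpy as np

out = np.empty((ni, len(z)))
for i in range(ni):
    # np.interp does plain 1-D linear interpolation without an interpolator object
    out[i] = np.interp(z, x[i], y[i])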
