Is there some efficient way to "double vectorize" a Numpy function?
Consider some function f which is vectorized over its first three positional arguments; its implementation consists entirely of Numpy vectorized operations (arithmetic, trigonometry, and so on) which correctly implement broadcasting.
The first two arguments of f are x and y, which represent some kind of input data. Its 3rd argument q is a parameter that controls some aspect of the computation.
In my program, I have the following:
Arrays x and y that are 1-d arrays of the same length. x[i] and y[i] correspond to the ith data point in a dataset.
Array q which is a 1-d array of different length. q[k] corresponds to some kth data point in a different collection.
I want to compute the value of f(x[i], y[i], q[k]) for any pair i, k, collecting the results in a matrix.
That is, I want to perform a vectorized version of the following calculation:
result = np.empty((len(x), len(q)))
for k in range(len(q)):
    for i in range(len(x)):
        result[i, k] = f(x[i], y[i], q[k])
The "singly-vectorized" version (over the i index) would be:
result = np.empty((len(x), len(q)))
for k in range(len(q)):
    result[:, k] = f(x, y, q[k])
And this is what I currently use in my code.
Is there an efficient way to vectorize over both indexes, maybe using some broadcasting trick?
As an example of such a function f, consider the Law of Cosines:
def law_of_cosines(a, b, ϑ):
    return np.sqrt(
        np.square(a) +
        np.square(b) -
        2.0 * a * b * np.cos(ϑ)
    )
Depending on the actual f, I think I'd approach it like...
# set up example variables
N, M = 11, 13
x = np.random.normal(size=N)
y = np.random.normal(size=N)
q = np.random.normal(size=M)

# reshape for broadcasting
X = x[:, np.newaxis]   # shape (N, 1)
Y = y[:, np.newaxis]   # shape (N, 1)
Q = q[np.newaxis, :]   # shape (1, M)

f(X, Y, Q)             # shape (N, M)
Then, if f is ufunc-like (for lack of a better term), it should broadcast nicely. If it isn't, and that's hard to tell without seeing the actual "a bit complicated" implementation, you could make it so, either by changing the implementation or possibly with numba's vectorize decorator, which is how I usually handle that.
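To make that concrete with the law_of_cosines example from the question, here is a minimal sketch (the array sizes are arbitrary) that checks the broadcast computation against the explicit double loop:

import numpy as np

def law_of_cosines(a, b, ϑ):
    return np.sqrt(np.square(a) + np.square(b) - 2.0 * a * b * np.cos(ϑ))

x = np.random.normal(size=11)
y = np.random.normal(size=11)
q = np.random.normal(size=13)

# broadcast version: (11, 1) op (1, 13) -> (11, 13)
broadcast = law_of_cosines(x[:, np.newaxis], y[:, np.newaxis], q[np.newaxis, :])

# reference double loop
loop = np.empty((len(x), len(q)))
for k in range(len(q)):
    for i in range(len(x)):
        loop[i, k] = law_of_cosines(x[i], y[i], q[k])

print(np.allclose(broadcast, loop))  # True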
When f separates into a product of a term depending only on (x, y) and a term depending only on q, you can vectorise over both indexes with np.outer. It contains no for loops.
The following code prints the result of the explicit double loop next to the "double-vectorised" np.outer version so that you can compare them.
import numpy as np
x = [0.1, 0.2, 0.3]
y = [0.4, 0.6, 0.8]
q = [0.4, 0.5, 0.6, 0.7]
result = np.empty((len(x),len(q)))
for k in range(len(q)):
    for i in range(len(x)):
        result[i, k] = np.arctan2(x[i], y[i]) * q[k]
print(result)
print(np.outer(np.arctan2(x, y), q))
Results:
[[0.09799147 0.12248933 0.1469872 0.17148506]
[0.12870022 0.16087528 0.19305033 0.22522539]
[0.14350827 0.17938534 0.2152624 0.25113947]]
[[0.09799147 0.12248933 0.1469872 0.17148506]
[0.12870022 0.16087528 0.19305033 0.22522539]
[0.14350827 0.17938534 0.2152624 0.25113947]]
Hope this helps.
Related
The following function is written in MATLAB. Now I need to write an equivalent Python function that will produce similar output. Can you help write the code, please?
function CORR=function_AutoCorr(tau,y)
% This function will generate a matrix, where on-diagonal elements are autocorrelations and
% off-diagonal elements are cross-correlations.
% y is the data set, e.g. a 10 by 9 matrix.
% tau is the lag value, e.g. tau=1.
Size=size(y);
N=Size(1,2); % number of columns
T=Size(1,1); % length of the rows
for i=1:N
    for j=1:N
        temp1=0;
        for t=1:T-tau
            G=0.5*((y(t+tau,i)*y(t,j))+(y(t+tau,j)*y(t,i)));
            temp1=temp1+G;
        end
        CORR(i,j)=temp1/(T-tau);
    end
end
end
Assuming that y is a NumPy array, it would be pretty close to something like this (although I have not tested it):
import numpy as np
def function_AutoCorr(tau, y):
    Size = y.shape
    N = Size[1]   # number of columns (series)
    T = Size[0]   # number of rows (time samples)
    CORR = np.zeros(shape=(N, N))
    for i in range(N):
        for j in range(N):
            temp1 = 0
            for t in range(T - tau):
                G = 0.5 * ((y[t+tau, i] * y[t, j]) + (y[t+tau, j] * y[t, i]))
                temp1 = temp1 + G
            CORR[i, j] = temp1 / (T - tau)
    return CORR
y = np.array([[1,2,3], [4,5,6], [6,7,8], [13,14,15]])
print(y)
result = function_AutoCorr(1, y)
print(result)
The resulting CORR matrix for this example is:
[[35.33333333 41.         46.66666667]
 [41.         47.66666667 54.33333333]
 [46.66666667 54.33333333 62.        ]]
If you want to run the function for different tau values, you could do, in Python:
result = [function_AutoCorr(tau, y) for tau in range(1, 11)]
The result will be a list of autocorrelation matrices, which are numpy arrays. This syntax is called a list comprehension.
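If performance matters, the three nested loops can be removed entirely with NumPy: the sum over t of y[t+tau, i] * y[t, j] is just an entry of the matrix product y[tau:].T @ y[:T-tau], and the 0.5 * (... + ...) symmetrisation is an average of that matrix with its transpose. A sketch of this idea (the name function_AutoCorr_vec is made up for this sketch):

import numpy as np

def function_AutoCorr_vec(tau, y):
    # Vectorized equivalent of the triple loop above.
    T = y.shape[0]
    M = y[tau:].T @ y[:T - tau]   # M[i, j] = sum over t of y[t+tau, i] * y[t, j]
    return 0.5 * (M + M.T) / (T - tau)

A quick np.allclose(function_AutoCorr(1, y), function_AutoCorr_vec(1, y)) check should then return True.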
You'll probably want to use NumPy. They even have a guide for Matlab users.
Here are some useful tips.
Defining a function
def auto_corr(tau, y):
    """Generate matrix of correlations"""
    # Do calculations
    return corr
Get the size of a numpy array
n_rows, n_cols = y.shape
Indexing
Indexing is 0-based and uses brackets ([]) instead of parentheses.
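For instance (a small illustrative example, not tied to the question's data):

import numpy as np

y = np.array([[1, 2, 3], [4, 5, 6]])
first_element = y[0, 0]   # MATLAB: y(1,1)
last_column = y[:, -1]    # MATLAB: y(:,end)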
How can I write a recursive function to generate a vector X of size (1,n) as follows, where X_i is the i-th entry:
X_1 = Z_1 * E_1
X_i = max{B_(1,i) * X_1, ... , B_((i-1),i) * X_(i-1), Z_i} * E_i, i = 2,...,n,
where
Z = np.random.normal(0, 1,size = n)
E = np.random.lognormal(0, 1, size = n)
B = np.random.uniform(0,1,(n,n))
I do not have any experience with recursive functions, which is why I cannot present any code I tried for this.
If you're working with numpy, then use all the power of numpy, not just the random module ;)
And if you work with vectors, then forget about recursion and use numpy's vectorised operations. For example, np.max gives you the maximum over an axis, element-wise multiplication is just the * operator (or np.multiply), and np.dot gives you the dot product. You also have np.prod for the product of array elements over a given axis. Those are just examples that might fit your problem well. For the full documentation, see https://docs.scipy.org/doc/numpy/
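A tiny illustration of the kind of operations meant here (the values are arbitrary):

import numpy as np

B = np.array([[0.2, 0.7],
              [0.5, 0.1]])
X = np.array([1.0, 3.0])

print(B * X)                   # element-wise multiplication, X broadcast across the rows
print(np.max(B * X, axis=1))   # maximum over each row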
I got it; recursion is not needed, as @meowgoesthedog stated in the first comment.
import numpy as np

s = 1000  # sample size
n = 5

Z = np.random.normal(0, 1, size=(s, n))
B = np.random.uniform(0, 1, (n, n))
E = np.random.lognormal(0, 1, size=(s, n))

X = np.zeros((s, n))
X[:, 0] = Z[:, 0] * E[:, 0]
for k in range(s):
    for l in range(1, n):
        X[k, l] = max(np.max(X[k, :l] * B[:l, l]), Z[k, l]) * E[k, l]
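The loop over the sample index k can be removed as well, since the recurrence only runs over l. A sketch of that, continuing from the arrays defined above (X2 is just a name for the second result so the two can be compared):

X2 = np.zeros((s, n))
X2[:, 0] = Z[:, 0] * E[:, 0]
for l in range(1, n):
    # maximum of B[0,l]*X2[:,0], ..., B[l-1,l]*X2[:,l-1] and Z[:,l], for all samples at once
    X2[:, l] = np.maximum(np.max(X2[:, :l] * B[:l, l], axis=1), Z[:, l]) * E[:, l]

print(np.allclose(X, X2))  # should print True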
I have two vectors X = [a,b,c,d] and Y = [m,n,o]. I'd like to construct a matrix M where each element is an operation on each pair from X and Y. i.e.
M[j,i] = f(X[i], Y[j])
# e.g. where f(x,y) = x-y:
M :=
a-m b-m c-m d-m
a-n b-n c-n d-n
a-o b-o c-o d-o
I imagine I could do this with two tf.while_loop() calls, but that seems inefficient; I was wondering whether there is a more compact, parallel way of doing this.
P.S. There is a slight complication: X and Y are in fact not vectors but rank-2, i.e. each element in X and Y is itself a fixed-length vector, and f(X, Y) applies f() element-wise. Plus there is a batch component too.
I.e.
X.shape => [BATCH, I, K]
Y.shape => [BATCH, J, K]
M[batch, j, i, k] = f( X[batch, i, k], Y[batch, j, k] )
# e.g.:
= X[batch, i, k] - Y[batch, j, k]
this is using the python API btw
I found a way of doing this by increasing rank and using broadcasting. I still don't know if this is the most efficient way of doing it, but it's a heck of a lot better than using tf.while_loop I guess! I'm still open to suggestions / improvements.
X_expand = tf.expand_dims(X, 1)  # [BATCH, 1, I, K]
Y_expand = tf.expand_dims(Y, 2)  # [BATCH, J, 1, K]
# M = f(X_expand, Y_expand) will broadcast each tensor along the size-1 axis of the other,
# effectively duplicating the data, e.g.:
M = X_expand - Y_expand          # [BATCH, J, I, K]
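A quick shape check of this approach (a minimal sketch; the sizes are made up and it assumes TensorFlow 2-style eager execution):

import numpy as np
import tensorflow as tf

BATCH, I, J, K = 2, 4, 3, 5
X = tf.constant(np.random.rand(BATCH, I, K), dtype=tf.float32)
Y = tf.constant(np.random.rand(BATCH, J, K), dtype=tf.float32)

M = tf.expand_dims(X, 1) - tf.expand_dims(Y, 2)
print(M.shape)  # (2, 3, 4, 5) == (BATCH, J, I, K)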
I have two given arrays: x and y. I want to calculate the correlation coefficient between the two arrays as follows:
import numpy as np
from scipy.stats import pearsonr
x = np.array([[[1,2,3,4],
[5,6,7,8]],
[[11,22,23,24],
[25,26,27,28]]])
i,j,k = x.shape
y = np.array([[[31,32,33,34],
[35,36,37,38]],
[[41,42,43,44],
[45,46,47,48]]])
xx = np.row_stack(np.dstack(x))
yy = np.row_stack(np.dstack(y))
results = []
for a, b in zip(xx, yy):
    r_sq, p_val = pearsonr(a, b)
    results.append(r_sq)
results = np.array(results).reshape(j,k)
print results
[[ 1. 1. 1. 1.]
[ 1. 1. 1. 1.]]
The answer is correct. However, I would like to know if there are better and faster ways of doing it using numpy and/or scipy.
An alternate way (not necessarily better) is:
xx = x.reshape(2,-1).T # faster, though the hard-coded 2 is a minor issue
yy = y.reshape(2,-1).T
results = [pearsonr(a,b)[0] for a,b in zip(xx,yy)]
results = np.array(results).reshape(x.shape[1:])
Another current thread was discussing the use of list comprehensions to iterate over values of an array(s): Confusion about numpy's apply along axis and list comprehensions
As discussed there, an alternative is to initialize results, and fill in values during the iteration. That's probably faster for really large cases, but for modest ones, this
np.array([... for .. in ...])
is reasonable.
The deeper question is whether pearsonr, or some alternative, can calculate this correlation for many pairs, rather than just one pair. That may require studying the internals of pearsonr, or other functions in stats.
Here's a first cut at vectorizing stats.pearsonr:
def pearsonr2(a, b):
    # stats.pearsonr adapted so that
    # a and b are 2-d arrays (here (N, 2)); the correlation is computed along axis 1
    n = a.shape[1]   # not used here; pearsonr only needs it for the p-value
    mx = a.mean(1)
    my = b.mean(1)
    xm, ym = a - mx[:, None], b - my[:, None]
    r_num = np.add.reduce(xm * ym, 1)
    r_den = np.sqrt(stats.ss(xm, 1) * stats.ss(ym, 1))   # stats.ss: sum of squares along an axis
    r = r_num / r_den
    r = np.clip(r, -1.0, 1.0)
    return r
print pearsonr2(xx,yy)
It matches your case, though these test values don't really exercise the function. I just took the pearsonr code, added the axis=1 parameter in most of the lines, and made sure everything ran. The prob step could be included with some boolean masking.
(I can add the stats.pearsonr code to my answer if needed).
This version will take a and b of any (matching) dimensions and do your pearsonr calculation along the designated axis. No reshaping needed.
def pearsonr_flex(a, b, axis=1):
    # stats.pearsonr adapted so that a and b may have any (matching) shape;
    # the correlation is computed along the designated axis
    n = a.shape[axis]   # not used here; pearsonr only needs it for the p-value
    mx = a.mean(axis, keepdims=True)
    my = b.mean(axis, keepdims=True)
    xm, ym = a - mx, b - my
    r_num = np.add.reduce(xm * ym, axis)
    r_den = np.sqrt(stats.ss(xm, axis) * stats.ss(ym, axis))   # stats.ss: sum of squares along an axis
    r = r_num / r_den
    r = np.clip(r, -1.0, 1.0)
    return r

pearsonr_flex(xx, yy, 1)
pearsonr_flex(x, y, 0)
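One caveat: scipy.stats.ss (the sum-of-squares helper used above) was deprecated and removed in later SciPy releases, so on a current installation you would replace those calls with a plain NumPy sum of squares. A self-contained variant with the same logic (the name pearsonr_np is just for this sketch):

import numpy as np

def pearsonr_np(a, b, axis=1):
    # Pearson r along the given axis, assuming a and b have matching shapes.
    am = a - a.mean(axis, keepdims=True)
    bm = b - b.mean(axis, keepdims=True)
    r = (am * bm).sum(axis) / np.sqrt((am * am).sum(axis) * (bm * bm).sum(axis))
    return np.clip(r, -1.0, 1.0)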
I have two 2D arrays, x(ni, nj) and y(ni, nj), that I need to interpolate over one axis. I want to interpolate along the last axis for every i in range(ni).
I wrote
import numpy as np
from scipy.interpolate import interp1d
z = np.asarray([200, 300, 400, 500, 600])
ni = x.shape[0]   # x and y are the given (ni, nj) arrays
out = []
for i in range(ni):
    f = interp1d(x[i, :], y[i, :], kind='linear')
    out.append(f(z))
out = np.asarray(out)
However, I think this method is inefficient and slow because of the loop when the array size is large. What is the fastest way to interpolate a multi-dimensional array like this? Is there any way to perform linear and cubic interpolation without a loop? Thanks.
The method you propose does have a python loop, so for large values of ni it is going to get slow. That said, unless you are going to have large ni you shouldn't worry much.
I have created sample input data with the following code:
def sample_data(n_i, n_j, z_shape):
    x = np.random.rand(n_i, n_j) * 1000
    x.sort()                 # interpolation needs x sorted along the last axis
    x[:, 0] = 0
    x[:, -1] = 1000
    y = np.random.rand(n_i, n_j)
    z = np.random.rand(*z_shape) * 1000
    return x, y, z
And I have tested it with these two versions of linear interpolation:
def interp_1(x, y, z):
    rows, cols = x.shape
    out = np.empty((rows,) + z.shape, dtype=y.dtype)
    for j in xrange(rows):
        out[j] = interp1d(x[j], y[j], kind='linear', copy=False)(z)
    return out

def interp_2(x, y, z):
    rows, cols = x.shape
    row_idx = np.arange(rows).reshape((rows,) + (1,) * z.ndim)
    col_idx = np.argmax(x.reshape(x.shape + (1,) * z.ndim) > z, axis=1) - 1
    ret = y[row_idx, col_idx + 1] - y[row_idx, col_idx]
    ret /= x[row_idx, col_idx + 1] - x[row_idx, col_idx]
    ret *= z - x[row_idx, col_idx]
    ret += y[row_idx, col_idx]
    return ret
interp_1 is an optimized version of your code, following Dave's answer. interp_2 is a fully vectorized implementation of linear interpolation that avoids any Python loop whatsoever. Coding something like this requires a sound understanding of broadcasting and indexing in NumPy, and some parts will be less optimized than what interp1d does. A prime example is finding the bin in which to interpolate a value: interp1d can break out of its search early once it finds the bin, while the function above compares the value against all bins.
So the result is going to be very dependent on what n_i and n_j are, and even how long your array z of values to interpolate is. If n_j is small and n_i is large, you should expect an advantage from interp_2, and from interp_1 if it is the other way around. Smaller z should be an advantage to interp_2, longer ones to interp_1.
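As a quick sanity check that the two implementations agree (a minimal sketch using the helper above; the sizes are arbitrary):

x, y, z = sample_data(10, 20, (5,))
print(np.allclose(interp_1(x, y, z), interp_2(x, y, z)))  # should print True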
I have actually timed both approaches with a variety of n_i and n_j, for z of shape (5,) and (50,); here are the graphs:
So it seems that for z of shape (5,) you should go with interp_2 whenever n_j < 1000, and with interp_1 elsewhere. Not surprisingly, the threshold is different for z of shape (50,), now being around n_j < 100. It is tempting to conclude that you should stick with your code if n_j * len(z) > 5000 and switch to something like interp_2 above otherwise, but there is a great deal of extrapolation in that statement! If you want to experiment further yourself, here's the code I used to produce the graphs.
import timeit
import matplotlib.pyplot as plt

n_s = np.logspace(1, 3.3, 25)
int_1 = np.empty((len(n_s),) * 2)
int_2 = np.empty((len(n_s),) * 2)
z_shape = (5,)
for i, n_i in enumerate(n_s):
    print int(n_i)
    for j, n_j in enumerate(n_s):
        x, y, z = sample_data(int(n_i), int(n_j), z_shape)
        int_1[i, j] = min(timeit.repeat('interp_1(x, y, z)',
                                        'from __main__ import interp_1, x, y, z',
                                        repeat=10, number=1))
        int_2[i, j] = min(timeit.repeat('interp_2(x, y, z)',
                                        'from __main__ import interp_2, x, y, z',
                                        repeat=10, number=1))

cs = plt.contour(n_s, n_s, np.transpose(int_1 - int_2))
plt.clabel(cs, inline=1, fontsize=10)
plt.xlabel('n_i')
plt.ylabel('n_j')
plt.title('timeit(interp_2) - timeit(interp_1), z.shape=' + str(z_shape))
plt.show()
One optimization is to allocate the result array once like so:
import numpy as np
from scipy.interpolate import interp1d

z = np.asarray([200, 300, 400, 500, 600])
out = np.zeros([ni, len(z)], dtype=np.float32)
for i in range(ni):
    f = interp1d(x[i, :], y[i, :], kind='linear')
    out[i, :] = f(z)
This saves the copying that your implementation does when the list built up by out.append(...) is converted back into an array with np.asarray.
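If linear interpolation is all you need, another small change worth trying is np.interp, which does the same job as interp1d(..., kind='linear') for one row without constructing an interpolator object on every iteration. A sketch (it assumes each x[i, :] is sorted in increasing order; also note that np.interp clamps points outside the x range to the end values instead of raising an error, unlike interp1d's default):

out = np.empty((ni, len(z)))
for i in range(ni):
    out[i, :] = np.interp(z, x[i, :], y[i, :])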