Numpy matrix of pairwise sums - python

Somewhat often I am in the situation where I have two one-dimensional arrays X and Y, and I would like to construct a matrix Z defined by
Z[i,j]=X[i]+Y[j]
Now this is not difficult to do; for example:
aux = np.outer(X, np.ones(len(Y)))    # aux[i, j] == X[i]
aux2 = np.outer(np.ones(len(X)), Y)   # aux2[i, j] == Y[j]
Z = aux + aux2                        # Z[i, j] == X[i] + Y[j]
My question is whether there is a less verbose way to get this result?
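A standard, less verbose route (not in the original post, but plain NumPy) is broadcasting, or the equivalent ufunc method np.add.outer; a minimal sketch with made-up data:

import numpy as np

X = np.array([1.0, 2.0, 3.0])   # illustrative data
Y = np.array([10.0, 20.0])

# X[:, None] has shape (len(X), 1); adding Y of shape (len(Y),) broadcasts
# the sum to shape (len(X), len(Y)), so Z[i, j] == X[i] + Y[j].
Z = X[:, None] + Y

# Every binary ufunc also exposes an outer method that does the same thing:
Z2 = np.add.outer(X, Y)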

Related

How to vectorize a 2D scalar function over a mesh

I have a function foo(x,y) that takes two scalars (or lists of scalars) and returns a scalar output (or a list of scalars computed pairwise from the inputs). I want to be able to evaluate this function over two orthogonal arrays such that the output is a matrix whose (i, j) entry is foo(x[i], y[j]).
I have a for-loop version that solves this problem as below:
import numpy as np
x = np.arange(50)  # Could be linspaces, whatever the axis in the vector space is
y = np.arange(50)
mat = np.zeros((len(x), len(y)))  # To hold the result for plotting
for i in range(len(x)):
    for j in range(len(y)):
        mat[i, j] = foo(x[i], y[j])
where my result is stored in mat. However, this is dreadfully slow and looks as if it could easily be vectorized. I'm not sure how to do this in Python, however, as it doesn't appear to be a job for something like zip or map. Is there another function or concept (beyond trivially building extremely long arrays of the same array rotated by a value and passing them that way) that could vectorize this successfully? Or does the nature of the foo function limit the ability to vectorize it?
In this case, itertools.product is the tool you want. It generates an iterable sequence of elements from the Cartesian product of N inputs, which you can use to discretize a vector space; you can then evaluate foo on each pair. This isn't vectorization per se, but it does eliminate the nested for loops, as sketched below.
See docs at https://docs.python.org/3/library/itertools.html#itertools.product
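As a rough sketch of that idea (foo here is a stand-in, since the original function isn't given):

import itertools
import numpy as np

def foo(x, y):
    # Placeholder for the question's scalar function (illustrative only)
    return x * np.sin(y)

x = np.arange(50)
y = np.arange(50)

# itertools.product(x, y) yields (x[i], y[j]) pairs in row-major order,
# so reshaping the flat results recovers the desired matrix.
mat = np.fromiter((foo(xi, yj) for xi, yj in itertools.product(x, y)),
                  dtype=float).reshape(len(x), len(y))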

Numpy array and matrix multiplication

I am trying to get rid of the for loop and instead do an array-matrix multiplication to decrease the processing time when the weights array is very large:
import numpy as np
sequence = [np.random.random(10), np.random.random(10), np.random.random(10)]
weights = np.array([[0.1, 0.3, 0.6], [0.5, 0.2, 0.3], [0.1, 0.8, 0.1]])
Cov_matrix = np.matrix(np.cov(sequence))
results = []
for w in weights:
    result = np.matrix(w) * Cov_matrix * np.matrix(w).T
    results.append(result.A)
Where:
Cov_matrix is a 3x3 matrix
weights is an array of length n containing n 1x3 matrices.
Is there a way to multiply/map weights to Cov_matrix and bypass the for loop? I am not very familiar with all the numpy functions.
I'd like to reiterate what's already been said in another answer: the np.matrix class has far more disadvantages than advantages these days, and I suggest moving to the use of the np.array class alone. Matrix multiplication of arrays can be easily written using the @ operator, so the notation is in most cases as elegant as for the matrix class (and arrays don't have several restrictions that matrices do).
With that out of the way, what you need can be done in terms of a call to np.einsum. We need to contract certain indices of three matrices while keeping one index alone in two matrices. That is, we want to perform w_{ij} * Cov_{jk} * w.T_{ki} with a summation over j, k, giving us an array with i indices. The following call to einsum will do:
res = np.einsum('ij,jk,ik->i', weights, Cov_matrix, weights)
Note that the above will give you a single 1d array, whereas you originally had a list of arrays each with shape (1,1); I suspect the 1d result is actually more convenient. Also note that I omitted the transpose in the second weights argument, which is why the corresponding summation indices appear as ik rather than ki; this should also be marginally faster.
To prove that the above gives the same result:
In [8]: results # original
Out[8]: [array([[0.02803215]]), array([[0.02280609]]), array([[0.0318784]])]
In [9]: res # einsum
Out[9]: array([0.02803215, 0.02280609, 0.0318784 ])
The same can be achieved by working with the weights as a matrix and then looking at the diagonal elements of the result. Namely:
np.diag(weights.dot(Cov_matrix).dot(weights.transpose()))
which gives (with different random input than in the run above, hence different numbers):
array([0.03553664, 0.02394509, 0.03765553])
This does more calculations than necessary (calculates off-diagonals) so maybe someone will suggest a more efficient method.
Note: I'd suggest slowly moving away from np.matrix and instead work with np.array. It takes a bit of getting used to not being able to do A*b but will pay dividends in the long run. Here is a related discussion.
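To illustrate that last note, a minimal example of the plain-array style (the data here is made up):

import numpy as np

A = np.random.random((3, 3))
b = np.random.random(3)

# With plain ndarrays, @ is matrix multiplication; no np.matrix wrapper is
# needed, and 1-D vectors stay 1-D instead of becoming 1x3 or 3x1 matrices.
y = A @ b       # matrix-vector product, shape (3,)
s = b @ A @ b   # quadratic form, a scalar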

Why is numpy's covariance totally different from mine?

I calculate my covariance with the following formula:
np.dot(X_zero_mean, X_zero_mean.T) / (X_zero_mean.shape[0] -1)
and compare it to
np.cov(X_zero_mean.T)
I print both resulting matrices to the console and create figures from them, but they are not the same. Why? Could it be that np.cov avoids some numerical error that is happening with my formula above?
The first figure shows my covariance, the second numpy's cov (figures omitted).
Without your exact matrices it's difficult to tell, but I would guess that it's because you're taking the transpose of the matrix before passing it to np.cov. That would also explain why numpy's result looks like it's of much higher dimension than yours. np.cov(X.T) is equivalent to np.dot(X.T, X), not np.dot(X, X.T).
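To make the equivalence concrete, a small self-contained check (synthetic data; the shapes assume rows are observations):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))      # 100 observations of 5 variables
X_zero_mean = X - X.mean(axis=0)

n = X_zero_mean.shape[0]
manual = X_zero_mean.T @ X_zero_mean / (n - 1)   # X.T @ X, not X @ X.T

print(np.allclose(manual, np.cov(X_zero_mean.T)))   # True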

Outer product of each column of a 2D array to form a 3D array - NumPy

Let X be an M x N matrix, and denote by xi the i-th column of X. I want to create a 3-dimensional N x M x M array consisting of the M x M matrices xi.dot(xi.T).
How can I do it most elegantly with numpy? Is it possible to do this using only matrix operations, without loops?
One approach with broadcasting -
X.T[:,:,None]*X.T[:,None]
Another with broadcasting and swapping axes afterwards -
(X[:,None,:]*X).swapaxes(0,2)
Another with broadcasting and a multi-dimensional transpose afterwards -
(X[:,None,:]*X).T
Another approach with np.einsum, which might be more intuitive thinking in terms of the iterators involved if you are translating from a loopy code -
np.einsum('ij,kj->jik',X,X)
The basic idea in all of these approaches is to pair each column of X with itself: we keep the column axis aligned and spread out the row axis into two separate dimensions for elementwise multiplication. We achieve this pairing by extending X into two 3D versions that broadcast against each other.
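A quick sanity check of these against a plain loop (shapes and data are illustrative):

import numpy as np

M, N = 4, 5
X = np.random.random((M, N))

out = np.einsum('ij,kj->jik', X, X)   # shape (N, M, M)

# Loop-based reference: slice j is the outer product of column j with itself.
ref = np.array([np.outer(X[:, j], X[:, j]) for j in range(N)])
print(np.allclose(out, ref))          # True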

NumPy - Dot Product along 3rd dimension without copying

I am trying to vectorize a function that takes as its input a 3-component vector "x" and a 3x3 "matrix" and produces the scalar:
def myfunc(x, matrix):
    return np.dot(x, np.dot(matrix, x))
However this needs to be called "n" times, and the vector x has different components each time. I would like to modify this function so that it takes a 3xn array (whose columns are the vectors x) and produces a vector whose components are the scalars that would have been computed at each iteration.
I can write down an Einstein summation that does the job, but it requires that I construct a 3x3xn stack of "copies" of the original 3x3 matrix. I am concerned that doing so will wipe out any performance gain from vectorizing. Is there any way to compute the vector I want without making copies of the 3x3?
Let x be the 3xN array and y be the 3x3 array. You're looking for
z = numpy.einsum('ji,jk,ki->i', x, y, x)
You also could have built that 3x3xN array you were talking about as a view of y to avoid copying, but it isn't necessary.
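As a sanity check, the einsum result can be compared against the original per-vector loop (illustrative data):

import numpy as np

n = 7
x = np.random.random((3, n))   # columns are the individual vectors
y = np.random.random((3, 3))

z = np.einsum('ji,jk,ki->i', x, y, x)

# Reference: apply the original myfunc column by column.
ref = np.array([x[:, i] @ y @ x[:, i] for i in range(n)])
print(np.allclose(z, ref))     # True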
