I am trying to convert code that contains the \ operator from Matlab (Octave) to Python. Sample code
B = [2;4]
b = [4;4]
B \ b
This works and produces 1.2 as an answer. Using this web page
http://mathesaurus.sourceforge.net/matlab-numpy.html
I translated that as:
import numpy as np
import numpy.linalg as lin
B = np.array([[2],[4]])
b = np.array([[4],[4]])
print(lin.solve(B, b))
This gave me an error:
numpy.linalg.linalg.LinAlgError: Array must be square
How come Matlab's \ operator works with a non-square matrix B?
Any solutions for this?
From MathWorks documentation for left matrix division:
If A is an m-by-n matrix with m ~= n and B is a column vector with m
components, or a matrix with several such columns, then X = A\B is the
solution in the least squares sense to the under- or overdetermined
system of equations AX = B. In other words, X minimizes norm(A*X - B),
the length of the vector AX - B.
The equivalent in numpy is np.linalg.lstsq:
In [15]: B = np.array([[2],[4]])
In [16]: b = np.array([[4],[4]])
In [18]: x,resid,rank,s = np.linalg.lstsq(B,b)
In [19]: x
Out[19]: array([[ 1.2]])
Matlab will actually do a number of different operations when the \ operator is used, depending on the shape of the matrices involved (see here for more details). In your example, Matlab is returning a least-squares solution rather than solving the linear equation directly, as it would with a square matrix. To get the same behaviour in numpy, do this:
import numpy as np
import numpy.linalg as lin
B = np.array([[2],[4]])
b = np.array([[4],[4]])
print(np.linalg.lstsq(B, b)[0])
which should give you the same solution as Matlab.
You can form the left inverse:
import numpy as np
import numpy.linalg as lin
B = np.array([[2],[4]])
b = np.array([[4],[4]])
B_linv = lin.solve(B.T.dot(B), B.T)
c = B_linv.dot(b)
print('c\n', c)
Result:
c
[[ 1.2]]
Actually, we can simply run the solver once, without forming an inverse, like this:
c = lin.solve(B.T.dot(B), B.T.dot(b))
print('c\n', c)
Result:
c
[[ 1.2]]
.... as before
Why? Because we are solving the least-squares problem
B.dot(c) = b
Multiplying through by B.T gives the normal equations:
B.T.dot(B).dot(c) = B.T.dot(b)
Now B.T.dot(B) is square and full rank, so it does have an inverse. We can therefore multiply through by the inverse of B.T.dot(B), or use a solver as above, to get c.
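As a quick sanity check (an added sketch, not part of the original answer), the normal-equations solution can be compared with np.linalg.lstsq:
import numpy as np
import numpy.linalg as lin
B = np.array([[2], [4]])
b = np.array([[4], [4]])
# solve the normal equations B.T B c = B.T b
c_normal = lin.solve(B.T.dot(B), B.T.dot(b))
# least-squares solution for comparison
c_lstsq = lin.lstsq(B, b, rcond=None)[0]
print(np.allclose(c_normal, c_lstsq))  # expected: True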
Given two sparse scipy matrices A, B I want to compute the row-wise outer product.
I can do this with numpy in a number of ways. The easiest perhaps being
np.einsum('ij,ik->ijk', A, B).reshape(n, -1)
or
(A[:, :, np.newaxis] * B[:, np.newaxis, :]).reshape(n, -1)
where n is the number of rows in A and B.
In my case, however, going through dense matrices eats up way too much RAM.
The only option I have found is thus to use a python loop:
sp.sparse.vstack((ra.T @ rb).reshape(1,-1) for ra, rb in zip(A,B)).tocsr()
While using less RAM, this is very slow.
My question is thus, is there a sparse (RAM efficient) way to take the row-wise outer product of two matrices, which keeps things vectorized?
(A similar question is numpy elementwise outer product with sparse matrices but all answers there go through dense matrices.)
We can directly calculate the csr representation of the result. It's not superfast (~3 seconds on 100,000x768) but may be ok, depending on your use case:
import numpy as np
import itertools
from scipy import sparse
def spouter(A, B):
    N, L = A.shape
    N, K = B.shape
    # nonzero values of each row of A paired with those of the same row of B
    drows = zip(*(np.split(x.data, x.indptr[1:-1]) for x in (A, B)))
    # outer product of the nonzero values, row by row
    data = [np.outer(a, b).ravel() for a, b in drows]
    # column indices of the nonzeros, row by row
    irows = zip(*(np.split(x.indices, x.indptr[1:-1]) for x in (A, B)))
    # flat column indices of the outer products within the L*K output row
    indices = [np.ravel_multi_index(np.ix_(a, b), (L, K)).ravel() for a, b in irows]
    # row pointer: running count of nonzeros per output row
    indptr = np.fromiter(itertools.chain((0,), map(len, indices)), int).cumsum()
    return sparse.csr_matrix((np.concatenate(data), np.concatenate(indices), indptr), (N, L*K))
A = sparse.random(100,768,0.03).tocsr()
B = sparse.random(100,768,0.03).tocsr()
print(np.all(np.einsum('ij,ik->ijk',A.A,B.A).reshape(100,-1) == spouter(A,B).A))
A = sparse.random(100000,768,0.03).tocsr()
B = sparse.random(100000,768,0.03).tocsr()
from time import time
T = time()
C = spouter(A,B)
print(time()-T)
Sample run:
True
3.1073222160339355
Assume I have a set of vectors $ a_1, ..., a_d $ that are orthonormal to each other. Now, I want to find another vector $ a_{d+1} $ that is orthogonal to all the other vectors.
Is there an efficient algorithm to achieve this? I can only think of appending a random vector and then applying Gram-Schmidt.
Is there a python library which already achieves this?
Can't speak to optimality, but here is a working solution. The good thing is that numpy.linalg does all of the heavy lifting, so this may be speedier and more robust than doing Gram-Schmidt by hand. Besides, this suggests that the complexity is not worse than that of Gram-Schmidt.
The idea:
Treat your input orthogonal vectors as columns of a matrix O.
Add another random column to O, giving a matrix A. Generically, A will remain full rank.
Choose b = [0, 0, ..., 0, 1] with len(b) = d + 1.
Solve the least-squares problem A.T x = b. Then x is guaranteed to be non-zero and orthogonal to all of the original columns of O.
import numpy as np
from numpy.linalg import lstsq
from scipy.linalg import orth
# random matrix
M = np.random.rand(10, 5)
# get 5 orthogonal vectors in 10 dimensions in a matrix form
O = orth(M)
def find_orth(O):
rand_vec = np.random.rand(O.shape[0], 1)
A = np.hstack((O, rand_vec))
b = np.zeros(O.shape[1] + 1)
b[-1] = 1
return lstsq(A.T, b)[0]
res = find_orth(O)
if all(np.abs(np.dot(res, col)) < 10e-9 for col in O.T):
print("Success")
else:
print("Failure")
I have coded a kriging algorithm, but I find it quite slow. In particular, do you have an idea of how I could vectorise the piece of code in the cons function below?
import time
import numpy as np
B = np.zeros((200, 6))
P = np.zeros((len(B), len(B)))
def cons():
time1=time.time()
for i in range(len(B)):
for j in range(len(B)):
P[i,j] = corr(B[i], B[j])
time2=time.time()
return time2-time1
def corr(x,x_i):
return np.exp(-np.sum(np.abs(np.array(x) - np.array(x_i))))
time_av = 0.
for i in range(30):
    time_av += cons()
print("Average =", time_av / 30.)  # average over the 30 runs
Edit: Bonus questions
What happens to the broadcasting solution if I want corr(B[i], C[j]), with C having the same dimensions as B?
What happens to the scipy solution if my p-norm orders are an array:
p=np.array([1.,2.,1.,2.,1.,2.])
def corr(x, x_i):
return np.exp(-np.sum(np.abs(np.array(x) - np.array(x_i))**p))
For 2., I tried P = np.exp(-cdist(B, C,'minkowski', p)) but scipy is expecting a scalar.
Your problem seems very simple to vectorize. For each pair of rows of B you want to compute
P[i,j] = np.exp(-np.sum(np.abs(B[i,:] - B[j,:])))
You can make use of array broadcasting and introduce a third dimension, summing along the last one:
P2 = np.exp(-np.sum(np.abs(B[:,None,:] - B),axis=-1))
The idea is to reshape the first occurrence of B to shape (N,1,M) while the second B is left with shape (N,M). With array broadcasting, the latter is equivalent to (1,N,M), so
B[:,None,:] - B
is of shape (N,N,M). Summing along the last index will then result in the (N,N)-shape correlation array you're looking for.
Note that if you were using scipy, you would be able to do this using scipy.spatial.distance.cdist (or, equivalently, a combination of scipy.spatial.distance.pdist and scipy.spatial.distance.squareform), without unnecessarily computing the lower triangular half of this symmetric matrix. Using #Divakar's suggestion in comments for the simplest solution this way:
from scipy.spatial.distance import cdist
P3 = 1/np.exp(cdist(B, B, 'minkowski',1))
cdist will compute the Minkowski distance in 1-norm, which is exactly the sum of the absolute values of coordinate differences.
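As for the bonus questions, the broadcasting approach extends directly; here is an added sketch (assuming C has the same shape as B and p is the length-6 array of orders, which cdist cannot accept):
import numpy as np
B = np.random.rand(200, 6)
C = np.random.rand(200, 6)
p = np.array([1., 2., 1., 2., 1., 2.])
# cross-correlation between rows of B and rows of C
P_cross = np.exp(-np.sum(np.abs(B[:, None, :] - C), axis=-1))
# per-coordinate p-norm orders, broadcast over the last axis
P_p = np.exp(-np.sum(np.abs(B[:, None, :] - C) ** p, axis=-1))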
I am trying to solve a matrix equation such as A.B = C, where A is the unknown matrix that I must find.
I have B(n*1) and C(n*1), so A must be n*n.
I used the B.T * A.T = C.T approach (numpy.linalg.solve(B.T, C.T)).
But it produces an error:
LinAlgError: Last 2 dimensions of the array must be square.
So the problem is that B isn't square.
Here's a little example for you:
import numpy as np
a = np.array([[1, 2], [3, 4]])
b = np.array([5, 6])
x = np.linalg.solve(a, b)
print "A={0}".format(a)
print "B={0}".format(b)
print "x={0}".format(x)
For more information, please read the docs
If you're solving for the matrix, there are infinitely many solutions (assuming that B is nonzero). Here's one of them:
Choose a nonzero element of B, say Bi. Now construct a matrix A whose ith column is C / Bi and whose other columns are zero.
It should be easy to verify that multiplying this matrix by B gives C.
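A minimal sketch of that construction (the variable names are illustrative only):
import numpy as np
n = 4
B = np.random.rand(n, 1)
C = np.random.rand(n, 1)
i = np.argmax(np.abs(B))         # pick a (robustly) nonzero element of B
A = np.zeros((n, n))
A[:, i] = C[:, 0] / B[i, 0]      # ith column is C / Bi, all other columns are zero
print(np.allclose(A.dot(B), C))  # expected: True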
What's the easiest way to get the DFT matrix for 2-d DFT in python? I could not find such function in numpy.fft. Thanks!
The easiest and most likely the fastest method would be using fft from SciPy.
import numpy as np
from scipy.fft import fft
def dftmtx(N):
    # the DFT matrix is the FFT applied to the identity matrix
    return fft(np.eye(N))
If you know an even faster way (it might be more complicated), I'd appreciate your input.
Just to make it more relevant to the main question - you can also do it with numpy:
import numpy as np
dftmtx = np.fft.fft(np.eye(N))
When I benchmarked both of them, I had the impression that the SciPy one was marginally faster, but I have not done it thoroughly and it was some time ago, so don't take my word for it.
Here's pretty good source on FFT implementations in python:
http://nbviewer.ipython.org/url/jakevdp.github.io/downloads/notebooks/UnderstandingTheFFT.ipynb
It is written mostly from a speed perspective, but in this case we can see that speed sometimes comes with simplicity too.
I don't think this is built in. However, direct calculation is straightforward:
import numpy as np
def DFT_matrix(N):
    i, j = np.meshgrid(np.arange(N), np.arange(N))
    omega = np.exp(-2 * np.pi * 1J / N)
    W = np.power(omega, i * j) / np.sqrt(N)
    return W
EDIT For a 2D FFT matrix, you can use the following:
x = np.zeros((N, N))  # x is any input data with those dimensions
W = DFT_matrix(N)
dft_of_x = W.dot(x).dot(W)
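As a quick sanity check (an added sketch, not part of the original answer), note that DFT_matrix carries a 1/sqrt(N) factor, so the product differs from numpy's unnormalized fft2 only by an overall factor of N:
import numpy as np
N = 8
x = np.random.rand(N, N) + 1j * np.random.rand(N, N)
W = DFT_matrix(N)
# W x W equals fft2(x) up to the 1/N normalization carried by W
print(np.allclose(N * W.dot(x).dot(W), np.fft.fft2(x)))  # expected: True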
As of scipy 0.14 there is a built-in scipy.linalg.dft:
Example with 16 point DFT matrix:
>>> import scipy.linalg
>>> import numpy as np
>>> m = scipy.linalg.dft(16)
Validate the unitary property; note the matrix is unscaled, so the product is 16*np.eye(16):
>>> np.allclose(np.abs(np.dot( m.conj().T, m )), 16*np.eye(16))
True
For the 2D DFT matrix, it is just a matter of the tensor product, or specifically the Kronecker product in this case, as we are dealing with matrix algebra.
>>> m2 = np.kron(m, m) # 256x256 matrix, flattened from (16,16,16,16) tensor
Now we can give it a tiled visualization; this is done by rearranging each row into a square block:
>>> import matplotlib.pyplot as plt
>>> m2tiled = m2.reshape((16,)*4).transpose(0,2,1,3).reshape((256,256))
>>> plt.subplot(121)
>>> plt.imshow(np.real(m2tiled), cmap='gray', interpolation='nearest')
>>> plt.subplot(122)
>>> plt.imshow(np.imag(m2tiled), cmap='gray', interpolation='nearest')
>>> plt.show()
Result (real and imag part separately):
As you can see, they are the 2D DFT basis functions.
Link to documentation
#Alex is basically correct; I add here the version I used for the 2D DFT:
def DFT_matrix_2d(N):
i, j = np.meshgrid(np.arange(N), np.arange(N))
A=np.multiply.outer(i.flatten(), i.flatten())
B=np.multiply.outer(j.flatten(), j.flatten())
omega = np.exp(-2*np.pi*1J/N)
W = np.power(omega, A+B)/N
return W
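A possible usage check for this version (again an added sketch, assuming a square N-by-N input): the matrix acts on the flattened input and carries a 1/N factor relative to the unnormalized np.fft.fft2:
import numpy as np
N = 8
x = np.random.rand(N, N)
W2 = DFT_matrix_2d(N)
y = N * W2.dot(x.flatten()).reshape(N, N)  # undo the 1/N factor built into W2
print(np.allclose(y, np.fft.fft2(x)))      # expected: True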
Lambda functions work too:
dftmtx = lambda N: np.fft.fft(np.eye(N))
You can call it by using dftmtx(N). Example:
In [62]: dftmtx(2)
Out[62]:
array([[ 1.+0.j, 1.+0.j],
[ 1.+0.j, -1.+0.j]])
If you wish to compute the 2D DFT as a single matrix operation, you need to unravel the matrix X on which you wish to compute the DFT into a vector, since each output of the DFT involves a sum over every index of the input, and a single square matrix multiplication cannot express that. Taking care to handle the indices correctly, I find the following works:
M = 16
N = 16
X = np.random.random((M,N)) + 1j*np.random.random((M,N))
Y = np.fft.fft2(X)
W = np.zeros((M*N, M*N), dtype=complex)
hold = []
for m in range(M):
for n in range(N):
hold.append((m,n))
for j in range(M*N):
for i in range(M*N):
k,l = hold[j]
m,n = hold[i]
W[j,i] = np.exp(-2*np.pi*1j*(m*k/M + n*l/N))
np.allclose(np.dot(W,X.ravel()),Y.ravel())
True
If you wish to change the normalization to orthogonal, divide W by sqrt(M*N); if you wish to have the inverse transformation, change the sign in the exponent (and, for the unnormalized convention above, include a 1/(M*N) factor).
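A small sketch of those two variants, reusing W, M and N from the code above (an added illustration, not part of the original answer):
import numpy as np
# orthogonal (unitary) normalization
W_ortho = W / np.sqrt(M * N)
# inverse transform: conjugated kernel (sign flip in the exponent)
# plus the 1/(M*N) factor that undoes the unnormalized forward DFT
W_inv = np.conj(W) / (M * N)
print(np.allclose(W_inv.dot(W), np.eye(M * N)))  # expected: True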
This might be a little late, but there is a faster alternative for creating the DFT matrix, using NumPy's vander. Also, this implementation does not use loops (explicitly):
def dft_matrix(signal):
N = signal.shape[0] # num of samples
w = np.exp((-2 * np.pi * 1j) / N) # remove the '-' for inverse fourier
r = np.arange(N)
w_matrix = np.vander(w ** r, increasing=True) # faster than meshgrid
return w_matrix
If I'm not mistaken, the main improvement is that this method generates each power of omega from the (already calculated) previous ones.
You can read about vander in the documentation:
numpy.vander
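A possible usage check (an added sketch): applying the vander-based matrix to a signal should reproduce numpy's fft of that signal.
import numpy as np
signal = np.random.rand(16)
W = dft_matrix(signal)
print(np.allclose(W.dot(signal), np.fft.fft(signal)))  # expected: True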