How to calculate the product of the diagonal in Sympy? - python

I am given a sympy matrix, i.e. a matrix consisting of real values and symbolic values, and I want to calculate the product of the diagonal entries of the matrix. An example:
import numpy as np
import sympy as sp
from sympy import *
from sympy.matrices import Matrix

x, y = symbols('x y')  # the symbols must be defined before use
A = sp.Matrix([[1, 0, 1], [2, 3, 1], [y, 5, x]])
Now the desired result would be 3*x. Of course I could do that with a for loop, but is there some other, cleaner solution?

You can use diagonal to get the diagonal elements and prod to multiply them:
In [46]: A.diagonal()
Out[46]: [1 3 x]
In [47]: prod(_)
Out[47]: 3⋅x

Oscar Benjamin provided a slick solution. This will work as well (np.prod is the current name; the np.product alias is deprecated and has been removed in recent NumPy versions):
np.prod(np.diag(A))
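If you prefer to stay entirely within sympy, Mul can multiply the diagonal entries directly; a minimal sketch, reusing the matrix from the question:
import sympy as sp

x, y = sp.symbols('x y')
A = sp.Matrix([[1, 0, 1], [2, 3, 1], [y, 5, x]])

# A.diagonal() yields the diagonal entries; Mul(*...) multiplies them symbolically
product = sp.Mul(*A.diagonal())
print(product)  # 3*x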

Related

Use numpy to sum indices based on another numpy vector

I am trying to sum specific indices per row in a numpy matrix, based on values in a second numpy vector. For example, with a matrix A and a vector of indices inds, I want to sum:
A[0, inds[0]] + A[1, inds[1]] + A[2, inds[2]] + A[3, inds[3]]
I am currently using a python for loop, making the code quite slow. Is there a way to do this using vectorisation? Thanks!
Yes, numpy's magic indexing can do this. Just generate a range for the 1st dimension and use your coords for the second:
import numpy as np
x1 = np.array( [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]] )
print(x1[ [0,1,2,3],[2,0,3,1] ].sum())
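For the general case, where the index vector is not hard-coded, np.arange builds the row indices; a short sketch assuming inds holds one column index per row (names here are illustrative):
import numpy as np

A = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
inds = np.array([2, 0, 3, 1])  # hypothetical index vector

# Pair row i with column inds[i], then sum the selected entries
total = A[np.arange(A.shape[0]), inds].sum()
print(total)  # 3 + 5 + 12 + 14 = 34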

Can I do (x_i-x_j)^T(x_i-x_j) for x_i, x_j are rows in a X matrix with numpy native function instead of loop

I need to compute $(x_i-x_j)^T(x_i-x_j)$ in numpy, where $x_i$ and $x_j$ are rows in a matrix $X$. Now I am using a loop, which is very slow. Is there any numpy native function that allows such a computation, like einsum:
def pairwise_sq_dists(X):  # wrapped in a function so that return is valid; name is illustrative
    n = X.shape[0]
    Y = np.zeros((n, n))
    for i in range(n):
        x = (X - X[i])**2      # squared differences of every row against row i
        x = np.sum(x, axis=1)  # sum over coordinates
        Y[i] = x
    return Y
BTW, I am very confused by einsum. Is there any good introductory material for it? The numpy manual page was not very clear to me.
Approach #1
You can use broadcasting as a vectorized approach -
import numpy as np
Y = np.sum((X - X[:,None,:])**2,2)
This should be efficient with relatively smaller input arrays.
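Since the question asks about einsum: the same result can be written with it, which makes the sum-of-products over the last axis explicit. A sketch, assuming X is an (n, d) array:
import numpy as np

X = np.random.rand(5, 3)  # hypothetical example data

diffs = X[:, None, :] - X[None, :, :]       # diffs[i, j] = X[i] - X[j]
Y = np.einsum('ijk,ijk->ij', diffs, diffs)  # Y[i, j] = sum_k diffs[i, j, k]**2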
Approach #2
Seems like you are performing euclidean distance calculations and getting the squared distances. So, you can use distance.cdist like so -
import numpy as np
from scipy.spatial import distance
Y = distance.cdist(X, X, 'sqeuclidean')
This should be efficient with large input arrays.
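A quick sanity check, with hypothetical random data, that the two approaches agree:
import numpy as np
from scipy.spatial import distance

X = np.random.rand(10, 4)
Y1 = np.sum((X - X[:, None, :])**2, 2)
Y2 = distance.cdist(X, X, 'sqeuclidean')
print(np.allclose(Y1, Y2))  # True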

Left Matrix Division and Numpy Solve

I am trying to convert code that contains the \ operator from Matlab (Octave) to Python. Sample code:
B = [2;4]
b = [4;4]
B \ b
This works and produces 1.2 as an answer. Using this web page
http://mathesaurus.sourceforge.net/matlab-numpy.html
I translated that as:
import numpy as np
import numpy.linalg as lin
B = np.array([[2],[4]])
b = np.array([[4],[4]])
print(lin.solve(B, b))
This gave me an error:
numpy.linalg.linalg.LinAlgError: Array must be square
How come Matlab's \ works with a non-square matrix for B?
Any solutions for this?
From MathWorks documentation for left matrix division:
If A is an m-by-n matrix with m ~= n and B is a column vector with m components, or a matrix with several such columns, then X = A\B is the solution in the least squares sense to the under- or overdetermined system of equations AX = B. In other words, X minimizes norm(A*X - B), the length of the vector AX - B.
The equivalent in numpy is np.linalg.lstsq:
In [15]: B = np.array([[2],[4]])
In [16]: b = np.array([[4],[4]])
In [18]: x,resid,rank,s = np.linalg.lstsq(B,b)
In [19]: x
Out[19]: array([[ 1.2]])
Matlab will actually do a number of different operations when the \ operator is used, depending on the shape of the matrices involved (see here for more details). In your example, Matlab is returning a least squares solution, rather than solving the linear equation directly, as it would with a square matrix. To get the same behaviour in numpy, do this:
import numpy as np
import numpy.linalg as lin
B = np.array([[2],[4]])
b = np.array([[4],[4]])
print(np.linalg.lstsq(B, b)[0])
which should give you the same solution as Matlab.
You can form the left inverse:
import numpy as np
import numpy.linalg as lin
B = np.array([[2],[4]])
b = np.array([[4],[4]])
B_linv = lin.solve(B.T.dot(B), B.T)
c = B_linv.dot(b)
print('c\n', c)
Result:
c
[[ 1.2]]
Actually, we can simply run the solver once, without forming an inverse, like this:
c = lin.solve(B.T.dot(B), B.T.dot(b))
print('c\n', c)
Result:
c
[[ 1.2]]
.... as before
Why? Because we have:
B.dot(c) == b
Multiplying through by B.T gives:
B.T.dot(B).dot(c) == B.T.dot(b)
Now, B.T.dot(B) is square and full rank, so it does have an inverse. We can therefore multiply through by the inverse of B.T.dot(B), or use a solver, as above, to get c.
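As a quick check, using the B and b from the question, the normal-equations solve and lstsq agree:
import numpy as np

B = np.array([[2], [4]])
b = np.array([[4], [4]])

c_normal = np.linalg.solve(B.T.dot(B), B.T.dot(b))
c_lstsq = np.linalg.lstsq(B, b, rcond=None)[0]
print(np.allclose(c_normal, c_lstsq))  # True; both give 1.2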

python numpy euclidean distance calculation between matrices of row vectors

I am new to Numpy and I would like to ask how to calculate the euclidean distance between points stored in a vector.
Let's assume that we have a numpy.array where each row is a vector, and a single point stored as a numpy.array. I would like to know if it is possible to calculate the euclidean distance between all the points and this single point and store the results in one numpy.array.
Here is an interface:
points  # 2d array of row-vectors
singlePoint  # one row-vector
listOfDistances = procedure(points, singlePoint)
Can we have something like this?
Or is it possible to pass a list of points instead of the single point, so that at the end we get a matrix of distances?
Thanks
To get the distance you can use the norm method of the linalg module in numpy:
np.linalg.norm(x - y)
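For a whole array of points at once, norm also takes an axis argument, so the distance vector comes out in one call; a sketch assuming points is an (n, d) array and single_point a length-d vector:
import numpy as np

points = np.arange(20).reshape((10, 2))  # hypothetical example data
single_point = np.array([3, 4])

# Row-wise euclidean distances from single_point
distances = np.linalg.norm(points - single_point, axis=1)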
While you can use vectorize, @Karl's approach will be rather slow with numpy arrays.
The easier approach is to just do np.hypot(*(points - single_point).T). (The transpose assumes that points is an Nx2 array rather than a 2xN one; if it's 2xN, you don't need the .T.)
However this is a bit unreadable, so you can write it out more explicitly like this (using some canned example data...):
import numpy as np
single_point = [3, 4]
points = np.arange(20).reshape((10,2))
dist = (points - single_point)**2
dist = np.sum(dist, axis=1)
dist = np.sqrt(dist)
import numpy as np

def distance(v1, v2):
    return np.sqrt(np.sum((v1 - v2) ** 2))
To apply a function to each element of a numpy array, try numpy.vectorize.
To do the actual calculation, we need the square root of the sum of squares of differences (whew!) between pairs of coordinates in the two vectors.
We can use zip to pair the coordinates, and sum with a comprehension to sum up the results. That looks like:
sum((x - y) ** 2 for (x, y) in zip(singlePoint, pointFromArray)) ** 0.5
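A runnable check of that expression, with hypothetical coordinates:
singlePoint = [3, 4]
pointFromArray = [0, 1]
d = sum((x - y) ** 2 for (x, y) in zip(singlePoint, pointFromArray)) ** 0.5
print(d)  # sqrt(9 + 9) ~= 4.243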
import numpy as np

def euclid_dist(t1, t2):
    return np.sqrt(((t1 - t2)**2).sum(axis=1))

single_point = [3, 4]
points = np.arange(20).reshape((10, 2))
distance = euclid_dist(single_point, points)

Left inverse in numpy or scipy?

I am trying to obtain the left inverse of a non-square matrix in python using either numpy or scipy.
How can I translate the following Matlab code to Python?
>> A = [0,1; 0,1; 1,0]
A =
0 1
0 1
1 0
>> y = [2;2;1]
y =
2
2
1
>> A\y
ans =
1.0000
2.0000
Is there a numpy or scipy equivalent of the left inverse \ operator in Matlab?
Use linalg.lstsq(A,y) since A is not square. See here for details. You can use linalg.solve(A,y) if A is square, but not in your case.
Here is a method that will work with sparse matrices (which, from your comments, is what you want). It uses the leastsq function from the optimize package:
from numpy import *
from scipy.sparse import csr_matrix
from scipy.optimize import leastsq
from numpy.random import rand

A = csr_matrix([[0., 1.], [0., 1.], [1., 0.]])
b = array([[2.], [2.], [1.]])

def myfunc(x):
    x.shape = (2, 1)
    return (A * x - b)[:, 0]

print(leastsq(myfunc, rand(2))[0])
generates
[ 1. 2.]
It is kind of ugly because of how I had to get the shapes to match up according to what leastsq wanted. Maybe someone else knows how to make this a little more tidy.
I have also tried to get something to work with the functions in scipy.sparse.linalg by using the LinearOperators, but to no avail. The problem is that all of those functions are made to handle square systems only. If anyone finds a way to do it that way, I would like to know as well.
For those who wish to solve large sparse least squares problems:
I have added the LSQR algorithm to SciPy. With the next release, you'll be able to do:
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr
import numpy as np
A = csr_matrix([[0., 1], [0, 1], [1, 0]])
b = np.array([[2.], [2.], [1.]])
lsqr(A, b)
which returns the answer [1, 2].
If you'd like to use this new functionality without upgrading SciPy, you may download lsqr.py from the code repository at
http://projects.scipy.org/scipy/browser/trunk/scipy/sparse/linalg/isolve/lsqr.py
As an alternative to the other answers, you can also look at the pseudo-inverse function pinv in numpy/scipy.
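For example, a minimal pinv sketch using the A and y from the question:
import numpy as np

A = np.array([[0., 1.], [0., 1.], [1., 0.]])
y = np.array([2., 2., 1.])

# pinv computes the Moore-Penrose pseudo-inverse; for a full-column-rank A
# it coincides with the left inverse
x = np.linalg.pinv(A).dot(y)
print(x)  # [1. 2.]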
You can calculate the left inverse using matrix calculations:
import numpy as np
linv_A = np.linalg.solve(A.T.dot(A), A.T)
(Why? Because linv_A equals the inverse of A.T.dot(A) times A.T, and that matrix satisfies linv_A.dot(A) == I whenever A has full column rank.)
Test:
np.set_printoptions(suppress=True, precision=3)
np.random.seed(123)
A = np.random.randn(3, 2)
print('A\n', A)
A_linv = np.linalg.solve(A.T.dot(A), A.T)
print('A_linv.dot(A)\n', A_linv.dot(A))
Result:
A
[[-1.086 0.997]
[ 0.283 -1.506]
[-0.579 1.651]]
A_linv.dot(A)
[[ 1. -0.]
[ 0. 1.]]
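Applied to the A and y from the question, this left inverse reproduces Matlab's A\y:
import numpy as np

A = np.array([[0., 1.], [0., 1.], [1., 0.]])
y = np.array([2., 2., 1.])

A_linv = np.linalg.solve(A.T.dot(A), A.T)
print(A_linv.dot(y))  # [1. 2.]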
I haven't tested it, but according to this web page it is:
linalg.solve(A,y)
(Note, though, that linalg.solve requires a square matrix; as the other answers point out, for a non-square A you need lstsq, lsqr, or the left-inverse approach instead.)
You can use lsqr from scipy.sparse.linalg to solve sparse matrix systems with least squares
