SVD command in Python vs. MATLAB

I have m = 10, n = 5, A = randn(m,n); [U,S,V] = svd(A); in MATLAB this returns a full 10x5 S matrix, whereas Python only returns S as a 5-element array. How do I recover the complete S matrix in Python? I have looked at several StackOverflow posts, but surprisingly none of them shed light on this.
Also, how much does a Python IDE matter? I use Spyder but have been told that Vim is perhaps the most common.
Thanks a lot.

To recover the complete matrix you can do as follows:
import numpy as np
m = 10
n = 5
A = np.random.randn(m, n)
U, S, V = np.linalg.svd(A)
It's true that S.shape = (5,).
You want something similar to https://www.mathworks.com/help/matlab/ref/svd.html, where for A = 4x2 the final S is 4x2 too.
To do that, you define a matrix B = np.zeros(A.shape) and fill its diagonal with the elements of S. By diagonal I mean the entries where i == j, as follows:
B = np.zeros(A.shape)
for i in range(m):
    for j in range(n):
        if i == j:
            B[i, j] = S[j]
Now B.shape = (10, 5) as expected.
Or in a more compact form:
C = np.array([[S[j] if i==j else 0 for j in range(n)] for i in range(m)])
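As an aside (not part of the original answer, and assuming SciPy is available), scipy.linalg.diagsvd builds the same rectangular diagonal matrix in one call:
from scipy.linalg import diagsvd
B2 = diagsvd(S, m, n)   # (10, 5) matrix with the singular values on the diagonal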
I hope it helps
For the second question, I use gedit (a standard text editor) and run the code in an IPython shell.
You can have a look at Jupyter too.

The SVD of a matrix can be written as
A = U S V^H
where ^H signifies the conjugate transpose. MATLAB's svd command returns U, S and V, while numpy.linalg.svd returns U, the diagonal of S, and V^H. Thus, to get the same S and V as in MATLAB you need to reconstruct S and also recover V:
import numpy
m = 10
n = 5
A = numpy.random.randn(m, n)
U, sdiag, VH = numpy.linalg.svd(A)
S = numpy.zeros((m, n))
numpy.fill_diagonal(S, sdiag)
V = VH.T.conj() # if you know you have real values only you can leave out the .conj()
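As a quick sanity check (a minimal sketch continuing from the snippet above), the reconstructed factors should reproduce A:
print(numpy.allclose(U @ S @ VH, A))          # should print True
print(numpy.allclose(U @ S @ V.conj().T, A))  # same check, written in terms of V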

Related

Subtracting Two dimensional arrays using numpy broadcasting

I'm new to NumPy in general, so this is probably an easy question; however, I'm clueless as to how to solve it.
I'm trying to implement the k-nearest-neighbor algorithm for classification of a data set.
There are two arrays named new_points and points that have the shapes (30,4)
and (120,4) respectively (with 4 being the number of properties of each element),
so I'm trying to calculate the distance between each new point and all old points using NumPy broadcasting:
def calc_no_loop(new_points, points):
    return np.sum((new_points - points)**2, axis=1)
# doesn't work; here is the log:
ValueError: operands could not be broadcast together with shapes (30,4) (120,4)
However, as per the rules of broadcasting, two arrays of shapes (30,4) and (120,4) are incompatible,
so I would appreciate any insight on how to solve this (using .reshape perhaps; I'm not sure).
Please note that I have already implemented the same function using one and two loops, but I can't implement it without any loop:
def calc_two_loops(new_points, points):
    m, n = len(new_points), len(points)
    d = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            d[i, j] = np.sum((new_points[i] - points[j])**2)
    return d

def calc_one_loop(new_points, points):
    m, n = len(new_points), len(points)
    d = np.zeros((m, n))
    print(d)
    for i in range(m):
        d[i] = np.sum((new_points[i] - points)**2)
    return d
Let's create an example smaller in size:
nNew = 3; nOld = 5 # Number of new / old points
# New points
new_points = np.arange(100, 100 + nNew * 4).reshape(nNew, 4)
# Old points
points = np.arange(10, 10 + nOld * 8, 2).reshape(nOld, 4)
To compute the differences alone, run:
dfr = new_points[:, np.newaxis, :] - points[np.newaxis, :, :]
So far we have the differences in each property of each point (every new point with every old point).
The shape of dfr is (3, 5, 4):
first dimension: the index of the new point,
second dimension: the index of the old point,
third dimension: the difference in each property.
Then, to sum squares of differences by points, run:
d = np.power(dfr, 2).sum(axis=2)
and this is your result.
For my sample data, the result is:
array([[31334, 25926, 21030, 16646, 12774],
       [34230, 28566, 23414, 18774, 14646],
       [37254, 31334, 25926, 21030, 16646]], dtype=int32)
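The same recipe applies unchanged to the shapes from the question (a minimal sketch with random data standing in for the real points):
import numpy as np

new_points = np.random.randn(30, 4)
points = np.random.randn(120, 4)

dfr = new_points[:, np.newaxis, :] - points[np.newaxis, :, :]  # shape (30, 120, 4)
d = np.power(dfr, 2).sum(axis=2)                               # shape (30, 120), squared distances
print(d.shape)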
So you have 30 new points and 120 old points, so if I understand you correctly you want a (120, 30)-shaped array of distances as the result.
You could do
import numpy as np

points = np.random.random(120*4).reshape(120, 4)
new_points = np.random.random(30*4).reshape(30, 4)

def calc_no_loop(new_points, points):
    res = np.zeros([len(points[:, 0]), len(new_points[:, 0])])
    for idx in range(len(points[:, 0])):
        res[idx, :] = np.sum((points[idx, :] - new_points)**2, axis=1)
    return np.sqrt(res)

test = calc_no_loop(new_points, points)
print(np.shape(test))
print(test)
Which gives
(120, 30)
[[0.67166838 0.78096694 0.94983683 ... 1.00960301 0.48076185 0.56419991]
[0.88156338 0.54951826 0.73919191 ... 0.87757896 0.76305462 0.52486626]
[0.85271938 0.56085692 0.73063341 ... 0.97884167 0.90509791 0.7505591 ]
...
[0.53968258 0.64514941 0.89225849 ... 0.99278462 0.31861253 0.44615026]
[0.51647526 0.58611128 0.83298535 ... 0.86669406 0.64931403 0.71517123]
[1.08515826 0.64626221 0.6898687 ... 0.96882542 1.08075076 0.80144746]]
But from your function name above I get the notion that you do not want a loop at all. Then you could do this instead:
def calc_no_loop(new_points, points):
    new_points1 = np.repeat(new_points[np.newaxis, ...], len(points), axis=0)
    points1 = np.repeat(points[:, np.newaxis, :], len(new_points), axis=1)
    return np.sqrt(np.sum((new_points1 - points1)**2, axis=2))

test = calc_no_loop(new_points, points)
print(np.shape(test))
print(test)
which has output:
(120, 30)
[[0.67166838 0.78096694 0.94983683 ... 1.00960301 0.48076185 0.56419991]
[0.88156338 0.54951826 0.73919191 ... 0.87757896 0.76305462 0.52486626]
[0.85271938 0.56085692 0.73063341 ... 0.97884167 0.90509791 0.7505591 ]
...
[0.53968258 0.64514941 0.89225849 ... 0.99278462 0.31861253 0.44615026]
[0.51647526 0.58611128 0.83298535 ... 0.86669406 0.64931403 0.71517123]
[1.08515826 0.64626221 0.6898687 ... 0.96882542 1.08075076 0.80144746]]
i.e. the same result. Note that I added np.sqrt() to the result, which you may have forgotten in your example above.
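As an aside (not from the original answer, and assuming SciPy is available), scipy.spatial.distance.cdist computes the same pairwise Euclidean distances without any manual broadcasting or repetition:
from scipy.spatial.distance import cdist

test = cdist(points, new_points)  # shape (120, 30), Euclidean distances
print(test.shape)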

OpenCV Python cv2.perspectiveTransform

I'm currently trying to do video stabilization using OpenCV and Python.
I use the following function to calculate rotation:
def accumulate_rotation(src, theta_x, theta_y, theta_z, timestamps, prev, current, f, gyro_delay=None, gyro_drift=None, shutter_duration=None):
    if prev == current:
        return src
    pts = []
    pts_transformed = []
    for x in range(10):
        current_row = []
        current_row_transformed = []
        pixel_x = x * (src.shape[1] / 10)
        for y in range(10):
            pixel_y = y * (src.shape[0] / 10)
            current_row.append([pixel_x, pixel_y])
            if shutter_duration:
                y_timestamp = current + shutter_duration * (pixel_y - src.shape[0] / 2)
            else:
                y_timestamp = current
            transform = getAccumulatedRotation(src.shape[1], src.shape[0], theta_x, theta_y, theta_z, timestamps, prev,
                                               current, f, gyro_delay, gyro_drift)
            output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], dtype="float32"), transform)
            current_row_transformed.append(output)
        pts.append(current_row)
        pts_transformed.append(current_row_transformed)
    o = utilities.meshwarp(src, pts_transformed)
    return o
I get the following error when it gets to output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], dtype="float32"), transform):
cv2.error: /Users/travis/build/skvark/opencv-python/opencv/modules/core/src/matmul.cpp:2271: error: (-215) scn + 1 == m.cols in function perspectiveTransform
Any help or suggestions would really be appreciated.
This implementation really needs to be changed in a future version, or the docs should be more clear.
From the OpenCV docs for perspectiveTransform():
src – input two-channel (...) floating-point array
Slant emphasis added by me.
>>> A = np.array([[0, 0]], dtype=np.float32)
>>> A.shape
(1, 2)
So we see from here that A is just a single-channel matrix, that is, two-dimensional. One row, two cols. You instead need a two-channel image, i.e., a three-dimensional matrix where the length of the third dimension is 2 or 3 depending on if you're sending in 2D or 3D points.
Long story short, you need to add one more set of brackets to make the set of points you're sending in three-dimensional, where the x values are in the first channel, and the y values are in the second channel.
>>> A = np.array([[[0, 0]]], dtype=np.float32)
>>> A.shape
(1, 1, 2)
Also, as suggested in the comments:
If you have an array points of shape (n_points, dimension) (i.e. dimension is 2 or 3), a nice way to re-format it for this use-case is points[np.newaxis]
It's not intuitive, and though it's documented, it's not very explicit on that point. That's all you need. I've answered an identical question before, but for the cv2.transform() function.
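Applied to the call from the question, that means adding one more set of brackets (or, equivalently, using np.newaxis); a minimal sketch:
output = cv2.perspectiveTransform(
    np.array([[[pixel_x, pixel_y]]], dtype="float32"), transform)

# or, reshaping an existing (n_points, 2) array first:
pt = np.array([[pixel_x, pixel_y]], dtype="float32")          # shape (1, 2)
output = cv2.perspectiveTransform(pt[np.newaxis], transform)  # input shape (1, 1, 2)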

ValueError: object too deep for desired array, when I try to use scipy's convolve2d method

Here's the code:
x = range(-6,7)
tmp1 = []
for i in range(len(x)):
    tmp1.append(math.exp(-(i*i)/(2*self.sigma*self.sigma)))
max_tmp1 = max(tmp1)
mod_tmp1 = []
for i in range(len(tmp1)):
    mod_tmp1.append(max_tmp1 - i)
ht1 = np.kron(np.ones((9,1)),tmp1)
sht1 = sum(ht1.flatten(1))
mean = sht1/(13*9)
ht1 = ht1 - mean
ht1 = ht1/sht1
print ht1.shape
h = np.zeros((16,16))
for i in range(0, 9):
    for j in range(0, 13):
        h[i+3, j+1] = ht1[i, j]
for i in range(0, 10):
    ag = 15*i
    np.append(h, scipy.misc.imrotate(h, ag, 'bicubic'))
R = []
print h.shape
print self.img.shape
for i in range(0, 11):
    print 'here'
    R[i] = scipy.signal.convolve2d(self.img, h[i], mode = 'same')
rt = np.zeros(self.img.shape)
x, y = self.img.shape
The error I get states:
ValueError: object of too small depth for desired array
It looks to me as if the problem is that you're setting h up wrongly. I assume you want h[i] to be a 16x16 array suitable for convolving with, but that's not what you've actually made it, for a couple of different reasons.
I suggest you change the loop with the imrotate calls to this:
h = [scipy.misc.imrotate(h, 15*i, 'bicubic') for i in range(10)]
(What your existing code does is: first set up h as a single 16x16 array; then, repeatedly: compute a rotated version, "flatten" both h and that to make 256-element vectors, compute the result of appending them to make a 512-element vector, and throw the result away. numpy.append doesn't operate in place, and defaults to flattening its arguments before it appends. Neither of those is what you want!)
The list comprehension above will give you a 10-element Python list containing rotated versions of your convolution kernel.
... Oh, I see that your loop computing R actually wants 11 kernels, not 10. Make it range(11), then. (Your original code generated rotations of 0, 0, 15, 30, ..., 135 degrees, but I'm guessing 0, 15, 30, ..., 150 degrees is more likely to be what you want.)
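Putting the pieces together (a minimal, untested sketch of the suggested fix; note that scipy.misc.imrotate only exists in older SciPy versions):
# 11 rotated copies of the 16x16 base kernel h (0, 15, ..., 150 degrees)
kernels = [scipy.misc.imrotate(h, 15*i, 'bicubic') for i in range(11)]

# convolve the image with each kernel
R = [scipy.signal.convolve2d(self.img, k, mode='same') for k in kernels]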

Why does numpy.random.dirichlet() not accept multidimensional arrays?

On the numpy page they give the example of
s = np.random.dirichlet((10, 5, 3), 20)
which is all fine and great; but what if you want to generate random samples from a 2D array of alphas?
alphas = np.random.randint(10, size=(20, 3))
If you try np.random.dirichlet(alphas), np.random.dirichlet([x for x in alphas]), or np.random.dirichlet((x for x in alphas)), it results in a
ValueError: object too deep for desired array. The only thing that seems to work is:
y = np.empty(alphas.shape)
for i in xrange(np.alen(alphas)):
    y[i] = np.random.dirichlet(alphas[i])
print y
...which is far from ideal for my code structure. Why is this the case, and can anyone think of a more "numpy-like" way of doing this?
Thanks in advance.
np.random.dirichlet is written to generate samples for a single Dirichlet distribution. That code is implemented in terms of the Gamma distribution, and that implementation can be used as the basis for a vectorized code to generate samples from different distributions. In the following, dirichlet_sample takes an array alphas with shape (n, k), where each row is an alpha vector for a Dirichlet distribution. It returns an array also with shape (n, k), each row being a sample of the corresponding distribution from alphas. When run as a script, it generates samples using dirichlet_sample and np.random.dirichlet to verify that they are generating the same samples (up to normal floating point differences).
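(For reference, the reason this works: if X_1, ..., X_k are independent with X_i ~ Gamma(alpha_i, 1), then (X_1, ..., X_k) / (X_1 + ... + X_k) ~ Dirichlet(alpha_1, ..., alpha_k). dirichlet_sample applies exactly this row by row, and np.random.standard_gamma accepts the whole (n, k) array of shape parameters at once.)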
import numpy as np

def dirichlet_sample(alphas):
    """
    Generate samples from an array of alpha vectors, one Dirichlet sample per row.
    """
    r = np.random.standard_gamma(alphas)
    return r / r.sum(-1, keepdims=True)

if __name__ == "__main__":
    alphas = 2 ** np.random.randint(0, 4, size=(6, 3))

    np.random.seed(1234)
    d1 = dirichlet_sample(alphas)
    print "dirichlet_sample:"
    print d1

    np.random.seed(1234)
    d2 = np.empty(alphas.shape)
    for k in range(len(alphas)):
        d2[k] = np.random.dirichlet(alphas[k])
    print "np.random.dirichlet:"
    print d2

    # Compare d1 and d2:
    err = np.abs(d1 - d2).max()
    print "max difference:", err
Sample run:
dirichlet_sample:
[[ 0.38980834 0.4043844 0.20580726]
[ 0.14076375 0.26906604 0.59017021]
[ 0.64223074 0.26099934 0.09676991]
[ 0.21880145 0.33775249 0.44344606]
[ 0.39879859 0.40984454 0.19135688]
[ 0.73976425 0.21467288 0.04556287]]
np.random.dirichlet:
[[ 0.38980834 0.4043844 0.20580726]
[ 0.14076375 0.26906604 0.59017021]
[ 0.64223074 0.26099934 0.09676991]
[ 0.21880145 0.33775249 0.44344606]
[ 0.39879859 0.40984454 0.19135688]
[ 0.73976425 0.21467288 0.04556287]]
max difference: 5.55111512313e-17
I think you're looking for
y = np.array([np.random.dirichlet(x) for x in alphas])
for your list comprehension. Otherwise you're simply passing a Python list or tuple. I imagine the reason numpy.random.dirichlet does not accept your list of alpha values is that it's simply not set up to: it already accepts an array, which it expects to be one-dimensional of length k, as per the documentation.

How to solve an LCP (linear complementarity problem) in python?

Is there a good library to numerically solve an LCP in Python?
Edit: I need a working Python code example, because most libraries seem to only solve quadratic problems and I have trouble converting an LCP into a QP.
For quadratic programming with Python, I use the qp-solver from cvxopt (source). Using this, it is straightforward to translate the LCP problem into a QP problem (see Wikipedia). Example:
from cvxopt import matrix, spmatrix
from cvxopt.blas import gemv
from cvxopt.solvers import qp

def append_matrix_at_bottom(A, B):
    l = []
    for x in range(A.size[1]):
        for i in range(A.size[0]):
            l.append(A[i + x * A.size[0]])
        for i in range(B.size[0]):
            l.append(B[i + x * B.size[0]])
    return matrix(l, (A.size[0] + B.size[0], A.size[1]))

M = matrix([[ 4.0,  6,   -4,    1.0],
            [ 6,    1,    1.0,  2.0],
            [-4,    1.0,  2.5, -2.0],
            [ 1.0,  2.0, -2.0,  1.0]])
q = matrix([12, -10, -7.0, 3])

I = spmatrix(1.0, range(M.size[0]), range(M.size[1]))
G = append_matrix_at_bottom(-M, -I)  # inequality constraint G z <= h
h = matrix([x for x in q] + [0.0 for _x in range(M.size[0])])

sol = qp(2.0 * M, q, G, h)  # find z, w, so that w = M z + q
if sol['status'] == 'optimal':
    z = sol['x']
    w = matrix(q)
    gemv(M, z, w, alpha=1.0, beta=1.0)  # w = M z + q
    print(z)
    print(w)
else:
    print('failed')
Please note:
the code is totally untested, please check carefully;
there surely are better solution techniques than transforming LCP into QP.
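For reference, the transformation used above is the standard one: the LCP asks for z >= 0 with w = M z + q >= 0 and z^T w = 0. The code solves the QP
minimize z^T M z + q^T z   subject to   M z + q >= 0,  z >= 0
(hence the 2.0 * M passed to qp, since cvxopt minimizes (1/2) x^T P x + q^T x). If M is positive semidefinite and the optimal objective value is 0, the minimizer z solves the LCP.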
Take a look at the scikit OpenOpt. It has an example of doing quadratic programming and I believe that it goes beyond SciPy's optimization routines. NumPy is required to use OpenOpt. I believe that the wikipedia page that you pointed us to for LCP describes how to solve a LCP by QP.
The best algorithm for solving MCPs (mixed nonlinear complementarity problems, more general than LCP) is the PATH solver: http://pages.cs.wisc.edu/~ferris/path.html
The PATH solver is available in MATLAB and GAMS. Both come with a Python API. I have chosen to use GAMS because there is a free version. So here is a step-by-step solution for solving an LCP with the Python API of GAMS. I used Python 3.6:
Download and install GAMS: https://www.gams.com/download/
Install the API to python like here: https://www.gams.com/latest/docs/API_PY_TUTORIAL.html
I used conda, changed the directory to where the apifiles for Python 3.6 were, and entered
python setup.py install
Create a .gms-file (GAMS file) lcp_for_py.gms containing:
sets i;
alias(i,j);
parameters m(i,i),b(i);
$gdxin lcp_input
$load i m b
$gdxin
positive variables z(i);
equations Linear(i);
Linear(i).. sum(j,m(i,j)*z(j)) + b(i) =g= 0;
model lcp linear complementarity problem/Linear.z/;
options mcp = path;
solve lcp using mcp;
display z.L;
Your Python code then looks like this:
import gams

# Set working directory, GamsWorkspace and the database
worDir = "<THE PATH WHERE YOU STORED YOUR .GMS-FILE>"  # "C:\documents\gams\"
ws = gams.GamsWorkspace(working_directory=worDir)
db = ws.add_database(database_name="lcp_input")

# Set the matrix and the vector of the LCP as lists
matrix = [[1, 1], [2, 1]]
vector = [0, -2]

# Create the GAMS set
index = []
for k in range(0, len(matrix)):
    index.append("i" + str(k + 1))
i = db.add_set("i", 1, "number of decision variables")
for k in index:
    i.add_record(k)

# Create a GAMS parameter named m and add records
m = db.add_parameter_dc("m", [i, i], "matrix of the lcp")
for k in range(0, len(matrix)):
    for l in range(0, len(matrix[0])):
        m.add_record([index[k], index[l]]).value = matrix[k][l]

# Create a GAMS parameter named b and add records
b = db.add_parameter_dc("b", [i], "vector of the lcp")
for k in range(0, len(vector)):
    b.add_record(index[k]).value = vector[k]

# Run the GamsJob using the GAMS file and the database
lcp = ws.add_job_from_file("lcp_for_py.gms")
lcp.run(databases=db)

# Save the solution as a list and print it
z = []
for rec in lcp.out_db["z"]:
    z.append(rec.level)
print(z)
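For this small example the solution can be checked by hand: z = (0, 2) satisfies z >= 0, M z + b = (2, 0) >= 0 and z . (M z + b) = 0, so the printed list should be approximately [0.0, 2.0].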
OpenOpt has a free LCP solver written in Python + NumPy; see http://openopt.org/LCP
