So I am trying to implement a simple optimization problem in Python using the CVXPY package (an optimization problem with a linear matrix inequality constraint). The code is shown below.
I have tried running the code using Python 3.6.
import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt
import control as cs
gamma = cp.Variable();
MAT1 = np.array([[1, gamma], [1+gamma, 3]])
constraints_2 = [MAT1 >> 0]
prob = cp.Problem(cp.Minimize(gamma),constraints_2)
prob.solve()
Every time I try to run this code, I get the following error:
"Non-square matrix in positive definite constraint."
But the matrix is clearly square! So I don't know what is happening.
Any ideas?
Your help is much appreciated!
MAT1 is a NumPy array; you'll need to make it a CVXPY Variable to use the semidefinite constraint. Try this:
MAT1 = cp.Variable((2, 2))
constraints_2 = [MAT1 >> 0, MAT1[0, 0] == 1, MAT1[1, 0] == 1 + MAT1[0, 1], MAT1[1, 1] == 3]
prob = cp.Problem(cp.Minimize(MAT1[0, 1]), constraints_2)
gamma is then about -2.73
There is a technical problem and a conceptual problem here.
Technical problem
The problem is that your MAT1 is not a CVXPY expression: np.array wraps the CVXPY variable in a NumPy object array, which the >> constraint cannot interpret as a single 2x2 matrix.
You could have written
MAT1 = cvxpy.vstack([cvxpy.hstack([1, gamma]), cvxpy.hstack([1 + gamma, 3])])
Or, more concisely:
MAT1 = cvxpy.bmat([[1, gamma], [1 + gamma, 3]])
This way CVXPY will accept your code, but it will not give the answer you expect.
Conceptual problem
SDP constraints are convex only for symmetric matrices, so what CVXPY does is make the matrix symmetric (apparently by adding it to its transpose). The solution given is the minimum gamma such that [[1, 0.5+gamma], [0.5+gamma, 3]] >> 0.
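Following that explanation, here is a minimal sketch (assuming a default CVXPY install with an SDP-capable solver such as SCS) that writes out the symmetrized matrix with cp.bmat and checks the result against the closed-form minimum:
import cvxpy as cp
import numpy as np

gamma = cp.Variable()
# Build the LMI as a CVXPY expression (cp.bmat), writing out the
# symmetrized matrix that the PSD constraint effectively enforces.
M = cp.bmat([[1, 0.5 + gamma], [0.5 + gamma, 3]])
prob = cp.Problem(cp.Minimize(gamma), [M >> 0])
prob.solve()

# [[1, a], [a, 3]] is PSD iff a**2 <= 3, so with a = 0.5 + gamma the
# minimum is gamma = -0.5 - sqrt(3), roughly -2.23.
print(gamma.value, -0.5 - np.sqrt(3))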
Related
I found code for computing the Jacobian matrix here and tried it on a non-linear system of equations.
import autograd.numpy as np
from autograd import jacobian
x = np.array([1,2], dtype=float)
def fs(x, y):
    return np.array([x + 2*y - 2, x**2 + 4*y**2 - 4])
jacobian_cost = jacobian(fs)
jacobian_cost(x[0],x[1])
But instead of outputting a result like this:
array([[ 1.,  2.],
       [ 2., 16.]])
it outputs this:
array([1., 2.])
I am curious about what's wrong with this code; maybe I am missing something.
You need to vectorize your function, and call it in a vectorized manner. So define
def fs(x):
    return np.stack([x[0, :] + 2*x[1, :] - 2, x[0, :]**2 + 4*x[1, :]**2 - 4])
and then call
x = np.array([[1.0], [2.0]])  # shape (2, 1): one column per point to evaluate
output = jacobian(fs)(x)
But note that the autograd library is quite outdated, doesn't have very good documentation, and is no longer actively maintained. I'd recommend using PyTorch or JAX instead.
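For example, here is a minimal JAX sketch of the same system (assuming jax is installed; jax.jacobian plays the role of autograd's jacobian):
import jax.numpy as jnp
from jax import jacobian

def fs(v):
    # v holds both unknowns, so the full 2x2 Jacobian is returned at once
    x, y = v[0], v[1]
    return jnp.stack([x + 2*y - 2, x**2 + 4*y**2 - 4])

print(jacobian(fs)(jnp.array([1.0, 2.0])))
# expected: [[ 1.  2.]
#            [ 2. 16.]]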
[Python 2.7 and NumPy v1.11.1] I am looking at matrix condition numbers and am trying to compute the condition number of a matrix without using the function np.linalg.cond().
Based on numpy's documentation, the definition of a matrix's condition number is, "the norm of x times the norm of the inverse of x."
||X|| * ||X^-1||
for the matrix
a = np.matrix([[1, 1, 1],
               [2, 2, 1],
               [3, 3, 0]])
print np.linalg.cond(a)
1.84814479698e+16
print np.linalg.norm(a) * np.linalg.norm(np.linalg.inv(a))
2.027453660713377e+17
Where is the mistake in my computation?
Thanks!
You are trying to compute the condition number using the Frobenius norm definition. That norm is available as an optional parameter of np.linalg.cond().
print(np.linalg.norm(a)*np.linalg.norm(np.linalg.inv(a)))
print(np.linalg.cond(a, p='fro'))
Produces
2.02745366071e+17
2.02745366071e+17
norm uses the Frobenius norm for matrices by default, while cond uses the 2-norm:
In [347]: np.linalg.cond(a)
Out[347]: 38.198730775206172
In [348]: np.linalg.norm(a,2)*np.linalg.norm(np.linalg.inv(a),2)
Out[348]: 38.198730775206243
In [349]: np.linalg.norm(a)*np.linalg.norm(np.linalg.inv(a))
Out[349]: 39.29814570824248
NumPy's cond() is currently buggy. There will come a time when we fix it, but for now, if you are doing this for solving linear equations, you can use SciPy's linalg.solve, which will raise an error for exact singularity, emit a warning if the reciprocal condition number is below a threshold, and do nothing if the array is well-conditioned.
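For illustration, a small sketch (assuming a reasonably recent SciPy) applied to the singular matrix from the question:
import numpy as np
from scipy import linalg

a = np.array([[1., 1., 1.],
              [2., 2., 1.],
              [3., 3., 0.]])
b = np.ones(3)

try:
    linalg.solve(a, b)          # exactly singular: SciPy raises
except linalg.LinAlgError as err:
    print("solve failed:", err)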
I'm trying to build a toy recommendation engine to wrap my mind around Singular Value Decomposition (SVD). I've read enough content to understand the motivations and intuition behind the actual decomposition of the matrix A (a user x movie matrix).
I need to know more about what goes on after that.
from numpy.linalg import svd
import numpy as np
A = np.matrix([
    [0, 0, 0, 4, 5],
    [0, 4, 3, 0, 0],
    ...
])
U, S, V = svd(A)
k = 5 #dimension reduction
A_k = U[:, :k] * np.diag(S[:k]) * V[:k, :]
Three Questions:
Do the values of matrix A_k represent the predicted/approximate ratings?
What role/ what steps does cosine similarity play in the recommendation?
And finally, I'm using Mean Absolute Error (MAE) to calculate my error. But what values am I comparing? Something like MAE(A, A_k), or something else?
Using Python & Numpy, I would like to:
- Consider each row of an (n columns x m rows) matrix as a vector
- Weight each row (scalar multiplication on each component of the vector)
- Add the rows to create a final vector (vector addition)
The weights are given in a regular numpy array (one weight per row), so that row m of the matrix is multiplied by weight m.
Here's what I've got (with test data; the actual matrix is huge), which is perhaps very un-Numpy and un-Pythonic. Can anyone do better? Thanks!
import numpy
# test data
mvec1 = numpy.array([1,2,3])
mvec2 = numpy.array([4,5,6])
start_matrix = numpy.matrix([mvec1,mvec2])
weights = numpy.array([0.5,-1])
#computation
wmatrix = [ weights[n]*start_matrix[n] for n in range(len(weights)) ]
vector_answer = [0,0,0]
for x in wmatrix: vector_answer+=x
Even though a 'technically' correct answer has already been given, I'll give my straightforward answer:
from numpy import array, dot
dot(array([0.5, -1]), array([[1, 2, 3], [4, 5, 6]]))
# array([-3.5, -4. , -4.5])
This one is much more in the spirit of linear algebra (and of the three bulleted requirements at the top of the question).
Update:
And this solution is really fast: not marginally, but easily some 10-15x faster than the already proposed one!
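If you want to verify the speed claim on your own machine, a rough sketch along these lines should do (the 10-15x figure is from the answer above, not reproduced here; timings depend on matrix size and hardware):
import numpy as np
import timeit

m, n = 2000, 300
start_matrix = np.random.rand(m, n)
weights = np.random.rand(m)

loop_way = lambda: sum(weights[i] * start_matrix[i] for i in range(m))
dot_way = lambda: weights @ start_matrix

print("loop:", timeit.timeit(loop_way, number=20))
print("dot: ", timeit.timeit(dot_way, number=20))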
It will be more convenient to use a two-dimensional numpy.array than a numpy.matrix in this case.
start_matrix = numpy.array([[1,2,3],[4,5,6]])
weights = numpy.array([0.5,-1])
final_vector = (start_matrix.T * weights).sum(axis=1)
# array([-3.5, -4. , -4.5])
The multiplication operator * does the right thing here due to NumPy's broadcasting rules.
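To spell out what the broadcasting does, here is a small sketch with the shapes noted in comments; both forms give the same result as the dot-product answer above:
import numpy as np

start_matrix = np.array([[1, 2, 3], [4, 5, 6]])   # shape (2, 3)
weights = np.array([0.5, -1])                     # shape (2,)

# start_matrix.T has shape (3, 2); broadcasting scales its column j by
# weights[j], and summing over axis 1 adds the weighted rows together.
via_broadcast = (start_matrix.T * weights).sum(axis=1)
via_dot = weights @ start_matrix

print(via_broadcast)   # [-3.5 -4.  -4.5]
print(via_dot)         # [-3.5 -4.  -4.5]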
I am trying to obtain the left inverse of a non-square matrix in python using either numpy or scipy.
How can I translate the following Matlab code to Python?
>> A = [0,1; 0,1; 1,0]
A =
0 1
0 1
1 0
>> y = [2;2;1]
y =
2
2
1
>> A\y
ans =
1.0000
2.0000
Is there a numpy or scipy equivalent of the left inverse \ operator in Matlab?
Use linalg.lstsq(A, y), since A is not square; see here for details. You could use linalg.solve(A, y) if A were square, but not in your case.
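For reference, a minimal example of that call with the A and y from the question (using numpy's lstsq; rcond=None selects the newer default cutoff and avoids a warning):
import numpy as np

A = np.array([[0., 1.], [0., 1.], [1., 0.]])
y = np.array([2., 2., 1.])

x, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(x)   # [1. 2.]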
Here is a method that will work with sparse matrices (which, from your comments, is what you want); it uses the leastsq function from the optimize package:
from numpy import *
from scipy.sparse import csr_matrix
from scipy.optimize import leastsq
from numpy.random import rand

A = csr_matrix([[0., 1.], [0., 1.], [1., 0.]])
b = array([[2.], [2.], [1.]])

def myfunc(x):
    x.shape = (2, 1)
    return (A*x - b)[:, 0]

print(leastsq(myfunc, rand(2))[0])
generates
[ 1. 2.]
It is kind of ugly because of how I had to get the shapes to match up according to what leastsq wanted. Maybe someone else knows how to make this a little more tidy.
I have also tried to get something working with the functions in scipy.sparse.linalg by using LinearOperators, but to no avail. The problem is that all of those functions are made to handle square systems only. If anyone finds a way to do it that way, I would like to know as well.
For those who wish to solve large sparse least squares problems:
I have added the LSQR algorithm to SciPy. With the next release, you'll be able to do:
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr
import numpy as np
A = csr_matrix([[0., 1], [0, 1], [1, 0]])
b = np.array([[2.], [2.], [1.]])
lsqr(A, b)
which returns a tuple whose first element is the solution, [1, 2].
If you'd like to use this new functionality without upgrading SciPy, you may download lsqr.py from the code repository at
http://projects.scipy.org/scipy/browser/trunk/scipy/sparse/linalg/isolve/lsqr.py
As an alternative to the other answers, you can also look at the pseudo-inverse function pinv in numpy/scipy.
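A minimal sketch of that approach with the same A and y (np.linalg.pinv computes the Moore-Penrose pseudo-inverse, and applying it to y gives the least-squares solution):
import numpy as np

A = np.array([[0., 1.], [0., 1.], [1., 0.]])
y = np.array([2., 2., 1.])

x = np.linalg.pinv(A) @ y
print(x)   # [1. 2.]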
You can calculate the left inverse using matrix calculations:
import numpy as np
linv_A = np.linalg.solve(A.T.dot(A), A.T)
(Why? Because (A^T A)^-1 A^T A = I, so (A^T A)^-1 A^T is a left inverse of A.)
Test:
np.set_printoptions(suppress=True, precision=3)
np.random.seed(123)
A = np.random.randn(3, 2)
print('A\n', A)
A_linv = np.linalg.solve(A.T.dot(A), A.T)
print('A_linv.dot(A)\n', A_linv.dot(A))
Result:
A
[[-1.086 0.997]
[ 0.283 -1.506]
[-0.579 1.651]]
A_linv.dot(A)
[[ 1. -0.]
[ 0. 1.]]
I haven't tested it, but according to this web page it is:
linalg.solve(A,y)
You can use lsqr from scipy.sparse.linalg to solve sparse matrix systems in a least-squares sense.