CVXPY vector constraints - python

I am looking to implement constraints on my optimization variable:
X = cvxpy.Variable(2)
How can I specify constraints on the components of X, like X[i] <= 1, for example? I tried that, but it doesn't seem to work. I did not find anything in the cvxpy documentation on this specific case, although it seems pretty basic...
I tried this simple example:
import cvxpy
x = cvxpy.Variable(2)
constraints = [x[0] <= 5,
               x[1] <= 5]
obj = cvxpy.Maximize(x[0] + x[1])
prob = cvxpy.Problem(obj, constraints)
but cvxpy does not find any solution
Thanks !

The documentation shows an example of this on the main page. You specify the constraints when you create the Problem. Here's a simple example:
import cvxpy
x = cvxpy.Variable(5)
constraints = [x[3] >= 3, x >= 0]
problem = cvxpy.Problem(cvxpy.Minimize(cvxpy.sum_squares(x)), constraints)
problem.solve()
x.value
Which outputs:
array([-0., -0., -0., 3., -0.])

The exact problem you described yields the expected solution:
import cvxpy as cvx
x = cvx.Variable(2)
constraints = [x[0] <= 5, x[1] <= 5]
obj = cvx.Maximize(x[0] + x[1])
prob = cvx.Problem(obj, constraints)
>>> prob.solve()
10.0
>>> x.value
array([5., 5.])
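As a side note, you don't have to index the components one by one: comparing a vector variable with a scalar gives elementwise constraints. A minimal sketch (assuming a reasonably recent cvxpy):
import cvxpy

x = cvxpy.Variable(2)
constraints = [x <= 5]  # elementwise: x[0] <= 5 and x[1] <= 5
prob = cvxpy.Problem(cvxpy.Maximize(cvxpy.sum(x)), constraints)
prob.solve()    # returns 10.0
print(x.value)  # array([5., 5.])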


Get different results from Pulp and Linprog

I am new to linear programming and am trying both PuLP and SciPy's linprog. Each gives me different results.
I think it might be because linprog uses an interior-point method whereas PuLP probably uses simplex. If so, is there a way to get PuLP to produce the same result as linprog?
from pulp import *
from scipy.optimize import linprog
# Pulp
# Upper bounds
r = {1: 11, 2: 11, 3: 7, 4: 11, 5: 7}
# Create the model
model = LpProblem(name="small-problem", sense=LpMaximize)
# Define the decision variables
x = {i: LpVariable(name=f"x{i}", lowBound=0, upBound=r[i]) for i in range(1, 6)}
# Add constraints
model += (lpSum(x.values()) <= 35, "headroom")
# Set the objective
model += lpSum([7 * x[1], 7 * x[2], 11 * x[3], 7 * x[4], 11 * x[5]])
# Solve the optimization problem
status = model.solve()
# Get the results
print(f"status: {model.status}, {LpStatus[model.status]}")
print(f"objective: {model.objective.value()}")
for var in x.values():
    print(f"{var.name}: {var.value()}")
for name, constraint in model.constraints.items():
    print(f"{name}: {constraint.value()}")
# linprog
c = [-7, -7, -11, -7, -11]
bounds = [(0, 11), (0, 11), (0, 7), (0, 11), (0, 7)]
A_ub = [[1, 1, 1, 1, 1]]
b_ub = [35]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res)
Output from code above:
status: 1, Optimal
objective: 301.0
x1: 10.0
x2: 0.0
x3: 7.0
x4: 11.0
x5: 7.0
headroom: 0.0
con: array([], dtype=float64)
fun: -300.9999999581466
message: 'Optimization terminated successfully.'
nit: 4
slack: array([4.60956784e-09])
status: 0
success: True
x: array([7., 7., 7., 7., 7.])
Bonus question: how would I formulate a problem where I want to maximise the values of the x[i]'s given some constraints? Above I am trying to maximise the sum of the x[i]'s, but I am wondering if there is a better way.
As @Erwin Kalvelagen has already pointed out in the comments, not all LPs have a unique solution. In your case you have two groups of variables, {x1, x2, x4} and {x3, x5}, that have the same coefficients in all occurrences.
In your case it is optimal to use the maximal possible value for x3 and x5, and whatever is still available towards 35 in your constraint is distributed among x1, x2, x4 arbitrarily (as it makes no difference for the objective).
Note that your pulp solution is a basic solution while your scipy solution is not. And yes, this likely is because the two use different algorithms to solve the problem.
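If you want linprog to return a vertex (basic) solution like PuLP's simplex solver does, you can select a simplex-based method explicitly. A sketch, assuming SciPy >= 1.6, where the HiGHS dual simplex backend ('highs-ds') is available:
from scipy.optimize import linprog

c = [-7, -7, -11, -7, -11]
bounds = [(0, 11), (0, 11), (0, 7), (0, 11), (0, 7)]
A_ub = [[1, 1, 1, 1, 1]]
b_ub = [35]

# 'highs-ds' runs the HiGHS dual simplex solver, which returns a basic solution
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs-ds')
print(res.x, -res.fun)
Keep in mind that even two simplex implementations can land on different optimal vertices when the optimum is not unique.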

Constrained Optimization Problem: Python

I am sure there must be a simple solution that keeps evading me.
I have a function
f = a*x + b*y + c*z
and a constraint
l*x + m*y + n*z = B
I need to find the (x, y, z) that maximizes f subject to the constraint.
I also need
x, y, z >= 0
I remember having seen a solution like this.
This example uses
a, b, c = 2, 4, 10 and l, m, n = 1, 2, 4 and B = 5
Ideally, this should give me x = 1, y = 0, z = 1, such that f = 12.
import numpy as np
from scipy.optimize import minimize

def objective(x, sign=-1.0):
    x1 = x[0]
    x2 = x[1]
    x3 = x[2]
    return sign * ((2 * x1) + (4 * x2) + (10 * x3))

def constraint1(x, sign=1.0):
    return sign * (1 * x[0] + 2 * x[1] + 4 * x[2] - 5)

x0 = [0, 0, 0]
b1 = (0, None)
b2 = (0, None)
b3 = (0, None)
bnds = (b1, b2, b3)
con1 = {'type': 'ineq', 'fun': constraint1}
cons = [con1]
sol = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons)
print(sol)
This is generating a bizarre solution. What am I missing?
The problem as you originally stated it, without integer constraints, can be solved simply and efficiently with linprog:
import scipy.optimize
c = [-2, -4, -10]
A_eq = [[1, 2, 4]]
b_eq = [5]
# bounds are for non-negative values by default
scipy.optimize.linprog(c, A_eq=A_eq, b_eq=b_eq)
I would recommend against using more general purpose solvers to solve narrow problems like this as you will often encounter worse performance and sometimes unexpected results.
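For reference, a sketch of running that and reading off the solution; the LP relaxation optimum agrees with the corrected SLSQP run shown in the next answer:
import scipy.optimize

c = [-2, -4, -10]
res = scipy.optimize.linprog(c, A_eq=[[1, 2, 4]], b_eq=[5])
print(res.x)     # expected: [0., 0., 1.25]
print(-res.fun)  # expected: 12.5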
You need to change your constraint to an 'equality constraint'. Also, your problem didn't specify that integer answers were required, so there is a better non-integer answer to this knapsack problem. (I don't have much experience with scipy.optimize and I'm not sure whether it can handle integer LP problems.)
In [13]: con1 = {'type': 'eq', 'fun': constraint1}
In [14]: cons = [con1,]
In [15]: sol = minimize (objective,x0,method='SLSQP',bounds=bnds,constraints=cons)
In [16]: print(sol)
fun: -12.5
jac: array([ -2., -4., -10.])
message: 'Optimization terminated successfully.'
nfev: 10
nit: 2
njev: 2
status: 0
success: True
x: array([0. , 0. , 1.25])
Like Jeff said, scipy.optimize doesn't handle integer programming problems.
You can try using PuLP instead for integer optimization problems:
from pulp import *
prob = LpProblem("F Problem", LpMaximize)
# a,b,c=2,4,10 and l,m,n=1,2,4 and B=5
a,b,c=2,4,10
l,m,n=1,2,4
B=5
# x,y,z>=0
x = LpVariable("x",0,None,LpInteger)
y = LpVariable("y",0,None,LpInteger)
z = LpVariable("z",0,None,LpInteger)
# f=ax+by+c*z
prob += a*x + b*y + c*z, "Objective Function f"
# lx+my+n*z=B
prob += l*x + m*y + n*z == B, "Constraint B"
# solve
prob.solve()
print("Status:", LpStatus[prob.status])
for v in prob.variables():
    print(v.name, "=", v.varValue)
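With these inputs the solver should report an optimal status with x = 1, y = 0, z = 1 and an objective value of 12, matching the integer answer the question expected.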

Howto: CVXPY Matrix Inequality Constraints

I am trying to formulate an optimization problem in the following way:
My optimization variable x is an n x n matrix.
x should be PSD.
It should satisfy 0 <= x <= I in the semidefinite sense, meaning it ranges from the all-zeros square matrix to the n-dimensional identity matrix.
Here is what I have come up with so far:
import cvxpy as cp
import numpy as np
import cvxopt
x = cp.Variable((2, 2), PSD=True)
a = cvxopt.matrix([[1, 0], [0, 0]])
b = cvxopt.matrix([[.5, .5], [.5, .5]])
identity = cvxopt.matrix([[1, 0], [0, 1]])
zeros = cvxopt.matrix([[0, 0], [0, 0]])
constraints = [x >= zeros, x <= identity]
objective = cp.Maximize(cp.trace(x*a - x * b))
prob = cp.Problem(objective, constraints)
prob.solve()
This gives me a result of [[1, 0], [0, 0]] as the optimal x, with a maximum trace of .5, but that should not be the case: I have run this same program in CVX in MATLAB and got the answer matrix [[.85, -.35], [-.35, .14]] with an optimal value of .707, which is correct.
I think my constraint formulation is not correct or not following cvxpy standards. How do I enforce the constraints in my program correctly?
(Here is my matlab version of the code:)
a = [1, 0; 0, 0];
b = [.5, .5; .5, .5];
cvx_begin sdp
variable x(2, 2) hermitian;
maximize(trace(x*a - x*b))
subject to
x >= 0;
x <= eye(2);
cvx_end
TIA
You need to use a PSD constraint. When you compare matrix expressions, cvxpy applies the inequality elementwise unless you use >> or <<, which denote the semidefinite ordering. You already constrained x to be PSD when you created it, so all you need to change is:
constraints = [x << np.eye(2)]
Then I get your solution:
array([[ 0.85355339, -0.35355339],
[-0.35355339, 0.14644661]])
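For completeness, a minimal sketch of the corrected program under those changes (assuming a recent cvxpy, where @ is matrix multiplication and plain numpy arrays work as constants):
import cvxpy as cp
import numpy as np

a = np.array([[1., 0.], [0., 0.]])
b = np.array([[.5, .5], [.5, .5]])

x = cp.Variable((2, 2), PSD=True)  # PSD=True already enforces x >> 0
constraints = [x << np.eye(2)]     # semidefinite ordering, not elementwise
objective = cp.Maximize(cp.trace(x @ (a - b)))
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value)  # about 0.707
print(x.value)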

Multiplying Block Matrices in Numpy

Hi everyone, I am a Python newbie.
I have to implement lasso L1 regression for a class assignment. This involves solving a quadratic program involving block matrices:
minimize x^T * H * x + f^T * x
where x > 0
Here H is a 2 x 2 block matrix with each element being a k x k matrix, and x and f are each 2 x 1 block vectors with each element being a k-dimensional vector.
I was thinking of using ndarrays.
Such that :
np.shape(H) = (2, 2, k, k)
np.shape(x) = (2, k)
But I figured out that np.dot(x, H) doesn't work here.
Is there an easy way to solve this problem? Thanks in advance.
First of all, I am convinced that converting to plain matrices would lead to more efficient computations. That said, if you treat your 2k x 2k matrix as a 2 x 2 block matrix, then you are operating in a tensor product of vector spaces and have to use tensordot instead of dot.
Let give it a try, with k=5 for example:
>>> import numpy as np
>>> k = 5
Define our matrix a and vector x
>>> a = np.arange(1.*2*2*k*k).reshape(2,2,k,k)
>>> x = np.arange(1.*2*k).reshape(2,k)
>>> x
array([[ 0., 1., 2., 3., 4.],
[ 5., 6., 7., 8., 9.]])
Now we can multiply our tensors. Be sure to choose the right axes; I haven't tested the following formula explicitly, so there might be an error:
>>> result = np.tensordot(a,x,([1,3],[0,1]))
>>> result
array([[ 985., 1210., 1435., 1660., 1885.],
[ 3235., 3460., 3685., 3910., 4135.]])
>>> np.shape(result)
(2, 5)
np.einsum gives good control over which axes are summed.
np.einsum('ijkl,jk', H, x)
is one possible (generalized) dot product; it contracts j and k, leaving a result of shape (2, k) (the first and last dimensions of H).
np.einsum('ijkl,jl', H, x)
is another. You need to be explicit about which dimensions of x go with which dimensions of H.
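To connect this back to the objective in the question, here is a sketch that evaluates x^T H x + f^T x with einsum and cross-checks it against an explicit 2k x 2k flattening (the random data here is purely illustrative):
import numpy as np

k = 3
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 2, k, k))
x = rng.standard_normal((2, k))
f = rng.standard_normal((2, k))

# (H x)[i, k'] = sum over j, l of H[i, j, k', l] * x[j, l]
Hx = np.einsum('ijkl,jl->ik', H, x)

# scalar objective x^T H x + f^T x
quad = np.einsum('ik,ik->', x, Hx) + np.einsum('ik,ik->', f, x)

# the same computation after flattening the blocks into a (2k, 2k) matrix
H2 = H.transpose(0, 2, 1, 3).reshape(2 * k, 2 * k)
x2, f2 = x.reshape(-1), f.reshape(-1)
assert np.isclose(quad, x2 @ H2 @ x2 + f2 @ x2)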

Numerical integration Loop Python

I would like to write a program that evaluates a definite integral in a loop, considering a different value of the constant c per iteration. In my case the integrand is 2*c*x, with limits between 0 and 1.
I would then like each solution of the integral to be output into a new array.
How do I best write this program in Python?
from scipy import integrate
integrate.quad
is acceptable here. My major struggle is structuring the program.
Here is an old attempt (that failed):
# import c
fn = 'cooltemp.dat'
c = loadtxt(fn, unpack=True, usecols=[1])

I = []
for n in range(len(c)):
    # equation
    eqn = 2*x*c[n]
    # integrate
    result, error = integrate.quad(lambda x: eqn, 0, 1)
    I.append(result)

I = array(I)
For instance, to compute the given integral for c in [0, 9]:
[scipy.integrate.quadrature(lambda x: 2 * c * x, 0, 1)[0] for c in range(10)]
This is using list comprehension and lambda functions.
Alternatively, you could define the function which returns the integral from a given c as a ufunc (thanks to vectorize). This is perhaps more in the spirit of numpy.
>>> func = lambda c: scipy.integrate.quadrature(lambda x: 2 * c * x, 0, 1)[0]
>>> ndfunc = np.vectorize(func)
>>> ndfunc(np.arange(10))
array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
You're really close.
fn = 'cooltemp.dat'
c_values = loadtxt(fn, unpack=True, usecols=[1])

I = []
for c in c_values:  # you can iterate over numpy arrays directly, no need for range(len(...))
    # eqn = 2*x*c[n]  # this doesn't work: x is not defined yet
    result, error = integrate.quad(lambda x: 2*c*x, 0, 1)
    I.append(result)

I = array(I)
I think you're a little confused about how lambda works.
my_func = lambda x: 2*x
is the same thing as:
def my_func(x):
    return 2*x
If you still don't like lambda, you can do this:
def f(x, c):
    return 2*x*c

# ...snip...

integral, error = integrate.quad(f, 0, 1, args=(c,))
from scipy import integrate

constants = [1, 2, 3]
integrals = []  # alternatively a dict: {}

def f(x, c):
    return 2*x*c

for c in constants:
    integral, error = integrate.quad(lambda x: f(x, c), 0., 1.)
    integrals.append(integral)  # alternatively, with a dict: integrals[c] = integral
This will produce a list of integrals, just like Nicolas's answer, for whatever list of constants you supply.
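Putting the pieces together, a minimal self-contained version using the args idiom from above (the closed form of this integral is simply c, which makes the output easy to check):
import numpy as np
from scipy import integrate

def f(x, c):
    return 2 * x * c

constants = np.array([1.0, 2.0, 3.0])
I = np.array([integrate.quad(f, 0.0, 1.0, args=(c,))[0] for c in constants])
print(I)  # [1. 2. 3.] since the integral of 2*c*x from 0 to 1 equals c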
