I am trying to solve a quadratic programming problem using IBM's Cplex Python API. The problem has non-linear constraints. Does Cplex accept non-linear constraints for quadratic programming? More specifically, given unknowns [x1,x2,x3,x4,x5], I need to put in two constraints
Constraint A (x2+x3) / (1-x1) = z1
Constraint B (x4+x5) / (1-x1) = z2
Where z1 and z2 are known numbers.
Cplex does have instructions on how to enter quadratic constraints, but none that I can find on entering non-linear constraints in general.
Could something like this help? Multiplying both sides of each constraint by (1 - x1) turns it into a linear constraint (valid as long as x1 != 1), which docplex accepts directly:
from docplex.mp.model import Model
mdl = Model(name='example')
# known ratio values
z1 = 2
z2 = 3
mdl.x1 = mdl.continuous_var(0, 10, name='x1')
mdl.x2 = mdl.continuous_var(0, 10, name='x2')
mdl.x3 = mdl.continuous_var(0, 10, name='x3')
mdl.x4 = mdl.continuous_var(0, 10, name='x4')
mdl.x5 = mdl.continuous_var(0, 10, name='x5')
# Constraint A: (x2 + x3) / (1 - x1) = z1, rewritten as x2 + x3 = z1 * (1 - x1)
mdl.add_constraint(mdl.x2 + mdl.x3 == z1 * (1 - mdl.x1), 'A')
# Constraint B: (x4 + x5) / (1 - x1) = z2, rewritten as x4 + x5 = z2 * (1 - x1)
mdl.add_constraint(mdl.x4 + mdl.x5 == z2 * (1 - mdl.x1), 'B')
mdl.solve()
print(mdl.x1.solution_value)
print(mdl.x2.solution_value)
print(mdl.x3.solution_value)
print(mdl.x4.solution_value)
print(mdl.x5.solution_value)
Hope that helps.
I am working on an optimization problem and I am using gekko with the APOPT solver to solve it. Sometimes, when I have a lot of variables, I get the following error: "Exception: #error: max equation length".
How can I find out what the "max equation length" is?
Variables and equations are written as a text file to the temporary folder before they are compiled into byte-code for efficient calculation of residuals, objective functions, sparse 1st derivatives, and sparse 2nd derivatives with automatic differentiation. Equations are limited to 15,000 characters each in gekko models, not just with the APOPT solver. This limit could be extended, but it is kept to encourage model-building methods that improve compilation and solution speed. A simple gekko model demonstrates the issue with x = sum(p), where p is a vector of 10,000 values.
from gekko import GEKKO
m = GEKKO(remote=False)
p = m.Array(m.Param,10000,value=1)   # 10,000 gekko parameters
x = m.Var(lb=0,integer=True)
m.Equation(x==sum(p))                # writes one equation with 10,000 terms
m.options.SOLVER = 1                 # APOPT
m.open_folder()
m.solve()
This gives an error that the maximum equation length is exceeded.
Exception: #error: Max Equation Length
Error with line number: 10008
The run folder is opened with m.open_folder(). The gekko model is in the file gk_model0.apm, which can be inspected with a text editor. There are a few ways to reduce the equation length for this problem.
The parameters p can be an ordinary numpy array, instead of gekko parameters. The summation is pre-computed before it is written to the model file.
from gekko import GEKKO
import numpy as np
m = GEKKO(remote=False)
p = np.ones(10000)        # plain numpy array, so sum(p) evaluates to a constant
x = m.Var(lb=0,integer=True)
m.Equation(x==sum(p))
m.options.SOLVER = 1
m.open_folder()
m.solve()
This gives the following gk_model0.apm model:
Model
Variables
int_v1 = 0, >= 0
End Variables
Equations
int_v1=10000.0
End Equations
End Model
If the parameters p need to be gekko parameters or variables, then use the m.sum() function instead of sum(). The use of m.sum() is more efficient than sum() or np.sum() for gekko optimization problems and avoids the problem with the maximum equation length. The compilation time is longer than with the first option.
from gekko import GEKKO
m = GEKKO(remote=False)
p = m.Array(m.Param,10000,value=1)
x = m.Var(lb=0,integer=True)
m.Equation(x==m.sum(p))
m.options.SOLVER = 1
m.solve()
Use m.Intermediate() when possible to reduce the equation size. This special type of variable incorporates model reduction principles: a quantity is explicitly calculated once and then used in multiple places in the model. There are additional suggestions in related questions and answers such as How to fix Python Gekko Max Equation Length error.
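For illustration, here is a minimal sketch of the m.Intermediate() pattern (the variables and equations are made up, not from the question): the repeated expression is declared once as an intermediate and referenced by name in the equations, which keeps each written equation short.
from gekko import GEKKO
m = GEKKO(remote=False)
x = m.Var(value=1)
y = m.Var(value=2)
# the repeated quantity is written to the model file only once
s = m.Intermediate(x**2 + y**2)
# reuse s instead of repeating x**2 + y**2 in every equation
m.Equation(s + x == 10)
m.Equation(s + y == 12)
m.solve(disp=False)
print(x.value[0], y.value[0])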
It's not entirely lasso because I add an extra constraint, but I'm not sure how I'm supposed to solve a problem like the following using cvxpy:
import cvxpy as cp
import numpy as np
A = np.random.rand(5000,1000)
v0 = np.random.rand(1000,1)
v = cp.Variable(v0.shape)
iota = np.ones(v0.shape)
lam = 1
objective = cp.Minimize( (A@(v-v0)).T@(A@(v-v0)) + lam * cp.abs(v).T @ iota )
constraints = [v >= 0]
prob = cp.Problem(objective, constraints)
res = prob.solve()
I tried various versions of this but this is the one that most clearly shows what I'm trying to do. I get the error:
DCPError: Problem does not follow DCP rules. Specifically: The objective is not DCP. Its following subexpressions are not: ....
And then an error I don't understand, haha.
CVXPY is a modeling language for convex optimization. Therefore, there is a set of rules your problem must follow to ensure your problem is indeed convex. These are what cvxpy refers to as DCP: Disciplined Convex Programming. As the error suggests, your objective is not DCP.
To be more precise, the problem is in the objective (A@(v-v0)).T@(A@(v-v0)): cvxpy doesn't know it is indeed convex (from the program's point of view, it's just a few multiplications).
To ensure your problem is DCP, it's best to use cvxpy atomic functions.
Essentially you are modeling x^T x (with x = A@(v-v0)), which is the squared 2-norm of a vector. cp.norm2 is the way to ensure cvxpy will know the problem is convex.
Change the objective function line to:
objective = cp.Minimize(cp.norm2(A @ (v - v0)) ** 2 + lam * cp.abs(v).T @ iota)
and it works.
(Also take a look at cvxpy example of lasso regression)
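For completeness, here is a full DCP-compliant sketch of the same problem (my own rewrite, not the answer's exact line): cp.sum_squares gives the squared 2-norm directly, and cp.sum(cp.abs(v)) is the same l1 penalty as cp.abs(v).T @ iota.
import cvxpy as cp
import numpy as np
A = np.random.rand(5000, 1000)
v0 = np.random.rand(1000, 1)
lam = 1
v = cp.Variable(v0.shape)
# sum_squares(A @ (v - v0)) is ||A(v - v0)||_2^2, which cvxpy recognizes as convex
objective = cp.Minimize(cp.sum_squares(A @ (v - v0)) + lam * cp.sum(cp.abs(v)))
constraints = [v >= 0]
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.status, prob.value)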
I'm trying to solve a non-linear PDE (an HJB equation) using FiPy, but I have some difficulties translating the PDE into the proper FiPy syntax.
I tried something like :
eqX = TransientTerm() == -DiffusionTerm(coeff=1) + (phi.faceGrad * phi.faceGrad)
and it doesn't work because of the square of the gradient.
My equation: du/dt = -\Delta u + \|\nabla u\|^2
Does FiPy allow solving this kind of equation? If not, is there a package or a way to solve it using finite differences?
Thank you!
It's possible to recast the final term as a diffusion term plus a source term, using the identity \|\nabla u\|^2 = \nabla\cdot(u \nabla u) - u \Delta u, so that the equation can be rewritten as
eqn = TransientTerm() == DiffusionTerm(coeff=u - 1) - u * u.faceGrad.divergence
That won't give an error, but it might not be very stable.
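As a rough, untested sketch of how that could be set up on a 1D grid (the mesh size, initial value, time step, and number of steps are placeholder choices, not from the question or the answer):
from fipy import Grid1D, CellVariable, TransientTerm, DiffusionTerm
# placeholder 1D mesh and initial condition
mesh = Grid1D(nx=100, dx=0.1)
u = CellVariable(name="u", mesh=mesh, value=0.5, hasOld=True)
# du/dt = -laplacian(u) + |grad(u)|^2, recast with
# |grad(u)|^2 = div(u grad u) - u laplacian(u)
eqn = TransientTerm() == DiffusionTerm(coeff=u - 1) - u * u.faceGrad.divergence
for step in range(50):
    u.updateOld()
    eqn.solve(var=u, dt=0.01)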
I have an optimization problem. All of my other constraints are linear, but I have one constraint that looks like this:
In this equation, s, r, and k are constants whose values I have, and a and s are unknown parameters.
Actually, the objective function is:
and it has some other linear constraints.
I'm searching for a Python package that can solve this problem and can accept the constraint I mentioned above as part of the optimization problem.
I first searched for linear programming solutions, but when I tried to build that constraint in pulp I got this error:
TypeError: Non-constant expressions cannot be multiplied
This is possible with PySCIPOpt:
from pyscipopt import Model
model = Model()
x = model.addVar("x")
y = model.addVar("y")
z = model.addVar("z")
model.setObjective(x + y + z)   # placeholder objective; use your own expression here
model.addCons(x / (y*z) >= 0)   # nonlinear (rational) constraint passed directly to SCIP
model.optimize()
You can formulate any polynomial expression in this way.
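If it helps, the solution values could then be read back after optimize() roughly like this (a small follow-up sketch, assuming SCIP found a feasible solution):
sol = model.getBestSol()
print(sol[x], sol[y], sol[z])
print("objective:", model.getObjVal())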
I am trying to set up a constraint which depends on the minimized function value.
The problem I have is of the following nature:
fmin = minimize(d1*x1 + d2*x2 + ... + d5*x5)
where I want to optimize with the following constraints:
x1 + x2 + x3 + x4 + x5 = 1
0.003 < x1, ..., x5 < 0.05
d1*x1 / fmin = y1
(d2*x2 + d3*x4) / fmin = y2
(d4*x4 + d5*x5) / fmin = y3
Here y1, ..., yn are scalar constants.
The problem I am having is that I don't know how to set up A_ub or A_eq in linprog so that b_ub = y1*fmin for d1*x1, for example.
So somehow I need to define:
d1*x1 / fmin = y1 as one of the constraints.
Here the optimal value vector will be (d1, ..., dn). However, this must also satisfy the constraint d1/minimized(d1, ..., dn) = y1, as an example.
How should I set this up ? What kind of optimizer do I use ?
I am able to do this very easily using the Excel Solver, but now I want to code it in Python. I am trying scipy.linprog, but I am not sure if this is a linear programming problem or whether I need another approach. I am not able to think of a way to set up the constraints in linprog for this problem. Can anyone help me?
Assuming that d1, ..., dn are scalar constants too, then for instance the constraint
d1*x1/fmin==y1
can be rewritten (multiplying both sides by fmin, which at the optimum equals d1*x1 + d2*x2 + ... + dn*xn) as
d1*x1==y1*d1*x1+y1*d2*x2+...+y1*dn*xn
This can be normalized to
(d1-y1*d1)*x1 - y1*d2*x2 - y1*d3*x3 - ... - y1*dn*xn == 0
which can be used as input for a linear solver.
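As a rough sketch of how the rewritten constraints could be fed to scipy.optimize.linprog (the d values, the grouping of terms, the bounds, and the y targets below are made-up placeholders chosen so the example is feasible, not data from the question):
import numpy as np
from scipy.optimize import linprog
# placeholder data: 5 variables, coefficients d, and target ratios y
# derived from a known feasible x so the equalities are consistent
d = np.array([2.0, 3.0, 1.0, 4.0, 5.0])
x_ref = np.full(5, 0.2)                       # reference point, sums to 1
f_ref = d @ x_ref
groups = [[0], [1, 2], [3, 4]]                # which d_i*x_i enter each ratio
y = [sum(d[i] * x_ref[i] for i in g) / f_ref for g in groups]
# equality constraints:
# 1) x1 + ... + xn = 1
# 2) for each group g with target y_g: sum_{i in g} d_i*x_i - y_g * sum_j d_j*x_j = 0
A_eq = [np.ones(5)]
b_eq = [1.0]
for g, yg in zip(groups, y):
    row = -yg * d.copy()
    for i in g:
        row[i] += d[i]
    A_eq.append(row)
    b_eq.append(0.0)
res = linprog(c=d, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=[(0, 1)] * 5)
print(res.status, res.x)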