Solve a linear optimization problem with a vector constraint in Python

I do not know how to set up and solve the following linear optimization problem in Python. Can you help?
R and B are vectors of length n with known constants.
x is a vector of length n, unknown and to be optimized.
Constraints:
B · x = 200 (dot product: elementwise product, then sum)
x(i) > 0
Find the optimal values in vector x that maximize:
max(R · x)
It does not seem very complicated, but I did not succeed in setting up the problem with the scipy library. Any help is appreciated.

That looks like a linear programming problem; take a look at scipy.optimize.linprog to solve it.
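A minimal sketch with linprog, using random data for R and B (linprog minimizes, so the objective is negated to maximize R · x; the strict x(i) > 0 is relaxed to the usual x(i) >= 0 bound):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 5
R = rng.random(n)  # known objective coefficients
B = rng.random(n)  # known constraint coefficients

res = linprog(
    c=-R,                     # negate to turn maximization into minimization
    A_eq=B.reshape(1, -1),    # enforces B @ x == 200
    b_eq=[200.0],
    bounds=[(0, None)] * n,   # x_i >= 0
)
print(res.x, -res.fun)        # optimal x and the maximized value of R @ x
```

The maximum puts all the weight on the index with the largest ratio R[i] / B[i], which is the expected behavior for a single equality constraint.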


How to solve separable differential equation using Sympy?

I cannot figure out how to solve this separable differential equation using sympy. Help would be greatly appreciated.
y′ = (y − 4)(y − 2),  y(0) = 5
Here is my attempt, thanks in advance!
import sympy as sp
x,y,t = sp.symbols('x,y,t')
y_ = sp.Function('y_')(x)
diff_eq = sp.Eq(sp.Derivative(y_,x), (y-4)*(y-2))
ics = {y_.subs(x,0):5}
sp.dsolve(diff_eq, y_, ics = ics)
The output is y(x) = x*y**2 - 6*x*y + 8*x + 5.
The primary error is the introduction of y_. This makes the variable y a constant parameter of the ODE and you get the wrong solution.
If you correct this you get an error of "too many solutions for the integration constant". This is a bug caused by not simplifying the integration constant after it first occurs. So multiplication and addition of constants should just be absorbed, an additive constant in an exponent should become a multiplicative factor for the exponential. As it is, exp(2*C_1)==3 has two solutions if C_1 is considered as an angle (it's a bit of tortured logic from computing roots in the complex plane).
Newer versions can actually solve this fully if you pass the third hint from the classification list ('separable', '1st_exact', '1st_rational_riccati', ...); it does something different from the partial-fraction decomposition used by the first two:
from sympy import *

x = Symbol('x')
y = Function('y')(x)
dsolve(Eq(y.diff(x), (y - 2)*(y - 4)), y,
       ics={y.subs(x, 0): 5},
       hint='1st_rational_riccati')
returning
y(x) = 2*(6 - exp(2*x)) / (3 - exp(2*x))

PuLP: Minimizing the standard deviation of decision variables

In an optimization problem developed in PuLP I use the following objective function:
objective = p.lpSum(vec[r] for r in range(0,len(vec)))
All variables are non-negative integers, so the sum over the vector gives the total number of units in my problem.
Now I am struggling with the fact that PuLP returns only one of many solutions, and I would like to narrow down the solution space to the solution set with the smallest standard deviation of the decision variables.
E.g. say vec has two elements that sum to 18, such as 6 and 12. Then 7/11, 8/10 and 9/9 are equally feasible solutions, and I would like PuLP to arrive at 9/9.
Then the objective
objective = p.lpSum(vec[r]*vec[r] for r in range(0,len(vec)))
would obviously create a cost function that would help, but alas, it is non-linear and PuLP throws an error.
Can anyone point me to a potential solution?
Instead of minimizing the standard deviation (which is inherently non-linear), you could minimize the range or bandwidth, along the lines of:
minimize maxv - minv
subject to
maxv >= vec[r] for all r
minv <= vec[r] for all r
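A minimal PuLP sketch of this min-range model, using the 18-unit example from the question as a stand-in for the real constraints:

```python
import pulp as p

prob = p.LpProblem("min_range", p.LpMinimize)
n = 2
vec = [p.LpVariable(f"v{r}", lowBound=0, cat="Integer") for r in range(n)]
maxv = p.LpVariable("maxv")
minv = p.LpVariable("minv")

prob += maxv - minv            # objective: minimize the bandwidth
prob += p.lpSum(vec) == 18     # example constraint fixing the total units

for r in range(n):
    prob += maxv >= vec[r]     # maxv bounds every variable from above
    prob += minv <= vec[r]     # minv bounds every variable from below

prob.solve(p.PULP_CBC_CMD(msg=0))
print([v.value() for v in vec])  # the balanced 9/9 split
```

Because any unbalanced split leaves maxv - minv > 0, the solver is driven to the most even allocation.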

Using cvxpy to solve a lasso like problem

It's not exactly lasso, because I add an extra constraint, but I'm not sure how I'm supposed to solve a problem like the following using cvxpy:
import cvxpy as cp
import numpy as np
A = np.random.rand(5000,1000)
v0 = np.random.rand(1000,1)
v = cp.Variable(v0.shape)
iota = np.ones(v0.shape)
lam = 1
objective = cp.Minimize( (A @ (v - v0)).T @ (A @ (v - v0)) + lam * cp.abs(v).T @ iota )
constraints = [v >= 0]
prob = cp.Problem(objective, constraints)
res = prob.solve()
I tried various versions of this but this is the one that most clearly shows what I'm trying to do. I get the error:
DCPError: Problem does not follow DCP rules. Specifically: The objective is not DCP. Its following subexpressions are not: ....
And then an error message I don't understand.
CVXPY is a modeling language for convex optimization, so there is a set of rules your problem must follow to ensure it really is convex. These are what cvxpy refers to as DCP: Disciplined Convex Programming. As the error suggests, your objective is not DCP.
To be more precise, the problem is the term (A @ (v - v0)).T @ (A @ (v - v0)): cvxpy doesn't know it is convex (from the program's point of view, it's just a few multiplications).
To ensure your problem is DCP, it's best to use cvxpy's atomic functions.
Essentially you are modeling x^T x (with x = A @ (v - v0)), which is the squared 2-norm of a vector. cp.norm2 is the way to let cvxpy know the problem is convex.
change the line of the objective function to:
objective = cp.Minimize(cp.norm2(A @ (v - v0)) ** 2 + lam * cp.abs(v).T @ iota)
and it works.
(Also take a look at cvxpy example of lasso regression)

Boundary condition for np.linalg.solve?

I have the following problem:
I have a system of 9 coupled linear equations with 9 variables that I want to solve for. I wrote it in matrix form, so I can solve it like:
A = 9x9 array (matrix)
b = 9x1 array (vector)
x = np.linalg.solve(A, b)
Now my problem is that I need a boundary condition that three elements of the solution should sum to one: x_00 + x_44 + x_88 = 1.
How do I implement that?
** (For people who know physics, basically I am solving for the steady state solution of a density matrix. And there is a reason why I solve it semi-analytically :) )
** I already get a solution now, but it differs from the solution I get in Wolfram Mathematica, where I can implement the boundary condition.
Thanks a lot for your help!
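One common sketch for this situation: a steady-state density-matrix system is linearly dependent (the trace is conserved), so you can replace one redundant row of A with the normalization condition. Illustrated here with random stand-in data, assuming indices 0, 4, 8 are the diagonal entries x_00, x_44, x_88 of the flattened 3×3 density matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((9, 9))   # stand-in for the coefficient matrix
b = rng.random(9)        # stand-in for the right-hand side

# Row encoding x_00 + x_44 + x_88 == 1
trace_row = np.zeros(9)
trace_row[[0, 4, 8]] = 1.0

# Swap one (redundant) equation for the normalization condition
A_mod = A.copy()
A_mod[-1] = trace_row
b_mod = b.copy()
b_mod[-1] = 1.0

x = np.linalg.solve(A_mod, b_mod)
print(x[0] + x[4] + x[8])  # satisfies the boundary condition
```

If you are unsure which equation is redundant, stacking the constraint onto the full system and using np.linalg.lstsq works as well.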

optimize with non-linear constraint in python - division by sum of variables

I have an optimization problem. All of my other constraints are linear, but I have one constraint that looks like this:
In this equation, s, r and k are constants whose values I have, and a and s are the unknown parameters.
Actually, the objective function is:
and it has some other linear constraints.
I'm searching for a Python package that can solve this problem and can express the constraint mentioned above.
I first looked at linear programming solvers, but when I tried to build that constraint in PuLP I got this error:
TypeError: Non-constant expressions cannot be multiplied
This is possible with PySCIPOpt:
from pyscipopt import Model

model = Model()
x = model.addVar("x")
y = model.addVar("y")
z = model.addVar("z")
c = x + y + z                  # example objective; substitute your own expression
model.setObjective(c)
model.addCons(x / (y * z) >= 0)
model.optimize()
You can formulate general nonlinear expressions, including rational constraints like this one, in this way.
