SciPy Linprog() Optimization - python

I am trying to solve the following optimization problem:
Objective function: min x1 + 2*x2
Constraints: x1 + 4*x2 >= 50, with x1 >= 0 and x2 >= 0
This is a linear problem, so I use the linprog() function.
The answer needs to be an integer; how do I set up the constraints so that the solution is not a decimal?
import numpy as np
from scipy import optimize

c = [1, 2]
A = [[-1, -4]]  # x1 + 4*x2 >= 50 rewritten as -x1 - 4*x2 <= -50
b = [-50]
x_bounds = (0, None)
y_bounds = (0, None)
result = optimize.linprog(c, A_ub=A, b_ub=b, bounds=[x_bounds, y_bounds])
print(result)

It is not possible to do integer programming with scipy.
This problem has already been discussed.
Please check this post: post
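(For reference: newer SciPy releases, 1.9 and later, did add mixed-integer support to linprog through the HiGHS backend via an integrality argument; this was not available when the answer above was written. A minimal sketch, assuming SciPy >= 1.9:)

from scipy import optimize

c = [1, 2]
A = [[-1, -4]]  # x1 + 4*x2 >= 50 rewritten as -x1 - 4*x2 <= -50
b = [-50]
# integrality=1 per variable requests an integer solution (HiGHS backend)
result = optimize.linprog(c, A_ub=A, b_ub=b,
                          bounds=[(0, None), (0, None)],
                          integrality=[1, 1], method="highs")
print(result.x)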

Related

SymPy lambdify gives wrong result, while *.subs gives the accurate one

Sorry for bothering you with this; I have a serious issue and I'm on the clock to solve it, so here is my question.
When I lambdify a quantity, its result differs from the .subs result; sometimes it's way off, and sometimes it's a NaN where there is actually a real number (found by subs).
Here is a small MWE where you can see the issue. Thanks in advance for your time!
import sympy as sy
import numpy as np

##STACK
# some quantities needed before you see the problem
r = sy.Symbol('r', real=True)
th = sy.Symbol('th', real=True)
e_c = 1e51
lf0 = 100
A = 1.6726e-24
# some quantities I define leading up to the problem
lfac = lf0 + 2
rd = 4*3.14/4/sy.pi/A/lfac**2
xi = r/rd  # rescaled r
# now to the problem:
# QUANTITY
lfxi = xi**(-3)*(lfac+1)/2*(sy.sqrt(1 + 4*lfac/(lfac+1)*xi**3 + (2*xi**3/(lfac+1))**2) - 1)
# RESULT WITH SUBS
print(lfxi.subs({th: 1.00, r: 1.00}).evalf())
# RESULT WITH LAMBDIFY
lfxi_l = sy.lambdify((r, th), lfxi)
lfxi_l(0.01, 1.00)
## gives 0
The issue is that your mpmath precision needs to be set higher!
By default mpmath uses prec=53 and dps=15, but your expression requires much higher precision than that to evaluate accurately:
# print(lfxi)
3.0256512324559e+62*(sqrt(1.09235114769539e-125*pi**6*r**6 + 6.74235013645028e-61*pi**3*r**3 + 1) - 1)/(pi**3*r**3)
...
from mpmath import mp
lfxi_l = sy.lambdify((r,th),lfxi, modules=["mpmath"])
mp.dps = 125
print(lfxi_l(1.00,1.00))
# 101.999... result
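The reason double precision fails is visible in the printed expression above: at r = 1 the argument of the square root differs from 1 only by terms of order 1e-59, far below the ~1e-16 resolution of float64, so sqrt(1 + tiny) - 1 cancels to exactly 0 before the huge 3e+62 prefactor is applied. A minimal sketch of the effect (my addition, not part of the original answer):

import numpy as np
from mpmath import mp, mpf, sqrt

tiny = 1e-59                           # rough size of the non-constant terms at r = 1
print(np.sqrt(1 + tiny) - 1)           # 0.0 -- the tiny term is lost in float64
mp.dps = 125
print(sqrt(1 + mpf("1e-59")) - 1)      # ~5e-60 -- recovered at high precision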
Changing a couple of the constants to "modest" values (and rebuilding lfxi with them):
In [89]: e_c=1; A=1
The different methods produce essentially the same thing:
In [91]: lfxi.subs({th:1.00,r:1.00}).evalf()
Out[91]: 1.00000000461176
In [92]: lfxi_l = sy.lambdify((r,th),lfxi)
In [93]: lfxi_l(1.0,1.00)
Out[93]: 1.000000004611762
In [94]: lfxi_m = sy.lambdify((r,th),lfxi, modules=["mpmath"])
In [95]: lfxi_m(1.0,1.00)
Out[95]: mpf('1.0000000046117619')

Quadratic Programming in Python using Numpy?

I am in the process of translating some MATLAB code into Python. There is one line that is giving me a bit of trouble:
[q,f_dummy,exitflag, output] = quadprog(H,f,-A,zeros(p*N,1),E,qm,[],[],q0,options);
I looked up the documentation in MATLAB to find that the quadprog function is used for optimization (particularly minimization).
I attempted to find a similar function in Python (using numpy) and there does not seem to be any.
Is there a better way to translate this line of code into Python? Or are there other packages that can be used? Do I need to make a new function that accomplishes the same task?
Thanks for your time and help!
There is a library called CVXOPT that has quadratic programming in it.
import numpy
import quadprog

def quadprog_solve_qp(P, q, G=None, h=None, A=None, b=None):
    qp_G = .5 * (P + P.T)  # make sure P is symmetric
    qp_a = -q
    if A is not None:
        qp_C = -numpy.vstack([A, G]).T
        qp_b = -numpy.hstack([b, h])
        meq = A.shape[0]
    else:  # no equality constraint
        qp_C = -G.T
        qp_b = -h
        meq = 0
    return quadprog.solve_qp(qp_G, qp_a, qp_C, qp_b, meq)[0]
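For example, the helper above could be called like this (a usage sketch with toy data of my own, assuming the quadprog package is installed):

import numpy
P = numpy.array([[4.0, 1.0], [1.0, 2.0]])
q = numpy.array([1.0, 1.0])
G = numpy.array([[1.0, 0.0], [0.0, 1.0]])  # inequality G x <= h
h = numpy.array([0.7, 0.7])
print(quadprog_solve_qp(P, q, G, h))  # minimizes 0.5*x'Px + q'x subject to x <= 0.7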
I will start by mentioning that quadratic programming problems are a subset of convex optimization problems which are a subset of optimization problems.
There are multiple Python packages which solve quadratic programming problems, notably
cvxopt -- which solves all kinds of convex optimization problems (including quadratic programming problems). This is a Python version of the earlier cvx MATLAB package.
quadprog -- this is exclusively for quadratic programming problems but doesn't seem to have much documentation.
scipy.optimize.minimize -- this is a very general minimizer which can solve quadratic programming problems, as well as other optimization problems (convex and non-convex).
You might also benefit from looking at the answers to this stackoverflow post which has more details and references.
Note: the code snippet in user1911226's answer appears to come from this blog post:
https://scaron.info/blog/quadratic-programming-in-python.html
which compares some of these quadratic programming packages. I can't comment on their answer, but they claim to be presenting the cvxopt solution while the code is actually for the quadprog solution.
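As an illustration of the scipy.optimize.minimize route mentioned above, here is a minimal sketch (my addition, not from the original answers) of a small QP, using the same toy data that appears in the answers below; for large or performance-critical problems a dedicated QP solver is preferable:

import numpy as np
from scipy.optimize import minimize

# minimize 0.5*x'Hx + f'x  subject to  x <= 0.7 (elementwise) and x1 + x2 == 1
H = np.array([[4.0, 1.0], [1.0, 2.0]])
f = np.array([1.0, 1.0])
cons = [
    {"type": "ineq", "fun": lambda x: np.array([0.7, 0.7]) - x},  # G x <= h
    {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0},           # A x == b
]
res = minimize(lambda x: 0.5 * x @ H @ x + f @ x, x0=np.ones(2),
               method="SLSQP", constraints=cons)
print(res.x)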
OSQP is a specialized free QP solver based on ADMM. I have adapted the OSQP documentation demo and the OSQP call in the qpsolvers repository to your problem.
Note that the matrices H and G are supposed to be sparse in CSC format. Here is the script:
import numpy as np
import scipy.sparse as spa
import osqp
def quadprog(P, q, G=None, h=None, A=None, b=None,
             initvals=None, verbose=True):
    l = -np.inf * np.ones(len(h))
    if A is not None:
        qp_A = spa.vstack([G, A]).tocsc()
        qp_l = np.hstack([l, b])
        qp_u = np.hstack([h, b])
    else:  # no equality constraint
        qp_A = G
        qp_l = l
        qp_u = h
    model = osqp.OSQP()
    model.setup(P=P, q=q, A=qp_A, l=qp_l, u=qp_u, verbose=verbose)
    if initvals is not None:
        model.warm_start(x=initvals)
    results = model.solve()
    return results.x, results.info.status
# Generate problem data
n = 2 # Variables
H = spa.csc_matrix([[4, 1], [1, 2]])
f = np.array([1, 1])
G = spa.csc_matrix([[1, 0], [0, 1]])
h = np.array([0.7, 0.7])
A = spa.csc_matrix([[1, 1]])
b = np.array([1.])
# Initial point
q0 = np.ones(n)
x, status = quadprog(H, f, G, h, A, b, initvals=q0, verbose=True)
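To inspect the outcome (a usage note of mine, not part of the original script):

print(x)       # solution vector
print(status)  # OSQP status string, e.g. 'solved'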
You could use the solve_qp function from qpsolvers. It can be installed, along with a starter kit of open source solvers, by pip install qpsolvers[open_source_solvers]. Then you can replace your line with:
from qpsolvers import solve_qp
solver = "proxqp" # or "osqp", "quadprog", "cvxopt", ...
x = solve_qp(H, f, G, h, A, b, initvals=q_0, solver=solver, **options)
There are many QP solvers available in Python, each with its own pros and cons. Try different values of the solver keyword argument to find the one that fits your problem best.
Here is a standalone example based on your question and the other comments:
import numpy as np
from qpsolvers import solve_qp
H = np.array([[4.0, 1.0], [1.0, 2.0]])
f = np.array([1.0, 1])
G = np.array([[1.0, 0.0], [0.0, 1.0]])
h = np.array([0.7, 0.7])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
q_0 = np.array([1.0, 1.0])
solver = "cvxopt" # or "osqp", "proxqp", "quadprog", ...
options = {"verbose": True}
x = solve_qp(H, f, G, h, A, b, initvals=q_0, solver=solver, **options)

Efficiently get shadow prices in scipy linprog

I have a huge linprog problem with almost 1k variables and constraints.
I can compute the solution with scipy.optimize.linprog(method='simplex'), but I need the shadow prices (or opportunity costs) of ~100 inequalities.
I'm able to calculate them by adding 1 to the right-hand side of an inequality and then solving the modified problem; the shadow price is the difference between the two objective values: shadow_price_i = f_max_original - f_max_i. Repeating this 100 times works, but it's painfully slow (1 h).
Is there something I can do to obtain shadow prices quicker? Maybe some trick or functionality I'm missing...
Solve the dual problem and that will give you all the shadow prices with just one more call to linprog: the shadow prices are exactly the optimal values of the dual variables. Here is an example for a standard LP problem:
import scipy.optimize as opt
import numpy as np

c = np.array([400, 200, 250])   # objective coefficients (a maximization, hence -c below)
b = np.array([1000, 300, 625])  # constraint bounds
A = np.array([[3, 1, 1.5],
              [0.8, 0.2, 0.3],
              [1, 1, 1]])       # constraint matrix
x1_bnds = (0, None)  # bounds on x1
x2_bnds = (0, None)  # bounds on x2
x3_bnds = (0, None)  # bounds on x3
result = opt.linprog(-c, A_ub=A, b_ub=b, bounds=(x1_bnds, x2_bnds, x3_bnds))

# Dual problem: its optimal variables are the shadow prices of the primal constraints
dual_c = b
dual_b = -1.0 * c
dual_A = -1.0 * np.transpose(A)
result = opt.linprog(dual_c, A_ub=dual_A, b_ub=dual_b,
                     bounds=(x1_bnds, x2_bnds, x3_bnds))
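(A newer alternative, not part of the original answer: with the HiGHS methods in recent SciPy versions, linprog reports dual values directly, so no second solve is needed. A sketch, assuming SciPy >= 1.7 and the variables from the example above:)

result = opt.linprog(-c, A_ub=A, b_ub=b,
                     bounds=(x1_bnds, x2_bnds, x3_bnds), method="highs")
print(result.ineqlin.marginals)  # one shadow price per A_ub constraint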

Python SciPy linprog optimization fails with status 3

Trying to minimize a simple linear function with linprog. The coefficients are the elements of arr2 multiplied by -1. There are only inequality constraints on each variable, such as -1 <= x1 <= 1, -2 <= x2 <= 2, and so on.
If I choose not to specify bounds in linprog:
from scipy.optimize import linprog
import numpy as np
import pandas as pd
numdim = 28
arr1 = np.ones(numdim)
arr1 = - arr1
arr2 = np.array([
    19.53, 128.97, 3538, 931.8, 0.1825, 150.88, 10315, 0.8109,
    3.9475, 3022, 31.77, 10323, 110.93, 220, 2219.5, 119.2,
    703.6, 616, 338, 84.67, 151.13, 111.28, 29.515, 29.67,
    158800, 167.15, 0.06802, 1179
])
constr_a = []
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = 1
    constr_a.append(constr_default)
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = -1
    constr_a.append(constr_default)
constr_a = np.asarray(constr_a)
constr_b = np.arange(1, 2*numdim + 1, 1)
constr_b[numdim:] = constr_b[:numdim]
print linprog(np.transpose(arr1 * arr2), constr_a, constr_b, bounds=(None, None))
I get the following result:
fun: -4327476.2887400016
message: 'Optimization failed. The problem appears to be unbounded.'
status: 3
I've tried changing the last row to:
print linprog(np.transpose(arr1 * arr2), constr_a, constr_b, bounds=(-1000, 1000))
The numbers specified as bounds here are arbitrary. The output is:
fun: -4327476.2887400296
message: 'Optimization terminated successfully.'
status: 0
which gives us a slightly different result and the desired status.
My question is: am I misusing the library, and in what way? Which answer is correct? This code was expected to work without specifying the bounds parameter. I cannot use that parameter because these simple constraints are unique for each variable.
I use Python 2.7 and scipy 0.17.1. Big thanks in advance.
Update
constr_a should be a matrix according to the documentation (https://docs.scipy.org/doc/scipy/reference/optimize.linprog-simplex.html), and it actually is one in the code. To be sure the syntax is correct, we can cut the number of dimensions to 2:
from scipy.optimize import linprog
import numpy as np
import pandas as pd
numdim = 2
arr1 = np.ones(numdim)
arr1 = - arr1
arr2 = np.array([19.53, 128.97])
constr_a = []
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = 1
    constr_a.append(constr_default)
for i in range(numdim):
    constr_default = np.zeros(numdim)
    constr_default[i] = -1
    constr_a.append(constr_default)
constr_a = np.asarray(constr_a)
constr_b = np.arange(1, 2*numdim + 1, 1)
constr_b[numdim:] = constr_b[:numdim]
print constr_a
print constr_b
print linprog(np.transpose(arr1 * arr2), constr_a, constr_b, bounds=(None, None))
and this will work.
The constr_a list is not properly formed. It is an array of arrays instead of an array of scalars. This might lead to an improper lower bound, causing the optimization to fail.
Perhaps
constr_a.append(constr_default)
should be
constr_a.append(constr_default[i])
Inspect both bound arrays to make sure they have the proper form and values.
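(A side note, not part of the answers above: linprog's bounds parameter accepts a sequence of per-variable (min, max) pairs, so box constraints that differ per variable can be passed directly instead of being encoded as inequality rows. A sketch under that reading of the docs:)

bounds = [(-(i + 1), i + 1) for i in range(numdim)]  # -1 <= x1 <= 1, -2 <= x2 <= 2, ...
print(linprog(arr1 * arr2, bounds=bounds))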

How to solve an LCP (linear complementarity problem) in python?

Is there a good library to numerically solve an LCP in Python?
Edit: I need a working Python code example because most libraries seem to only solve quadratic problems, and I have trouble converting an LCP into a QP.
For quadratic programming with Python, I use the qp solver from cvxopt (source). Using it, the translation from LCP to QP is straightforward (see Wikipedia): the LCP asks for z >= 0 with w = M z + q >= 0 and z^T w = 0, which is equivalent to minimizing z^T (M z + q) subject to M z + q >= 0 and z >= 0; the LCP is solved exactly when this QP attains the optimal value 0. Example:
from cvxopt import matrix, spmatrix
from cvxopt.blas import gemv
from cvxopt.solvers import qp

def append_matrix_at_bottom(A, B):
    l = []
    for x in xrange(A.size[1]):
        for i in xrange(A.size[0]):
            l.append(A[i + x*A.size[0]])
        for i in xrange(B.size[0]):
            l.append(B[i + x*B.size[0]])
    return matrix(l, (A.size[0] + B.size[0], A.size[1]))

M = matrix([[ 4.0, 6,   -4,    1.0],
            [ 6,   1,    1.0,  2.0],
            [-4,   1.0,  2.5, -2.0],
            [ 1.0, 2.0, -2.0,  1.0]])
q = matrix([12, -10, -7.0, 3])
I = spmatrix(1.0, range(M.size[0]), range(M.size[1]))
G = append_matrix_at_bottom(-M, -I)  # inequality constraint G z <= h
h = matrix([x for x in q] + [0.0 for _x in range(M.size[0])])
sol = qp(2.0 * M, q, G, h)  # find z, w, so that w = M z + q
if sol['status'] == 'optimal':
    z = sol['x']
    w = matrix(q)
    gemv(M, z, w, alpha=1.0, beta=1.0)  # w = M z + q
    print(z)
    print(w)
else:
    print('failed')
Please note:
the code is totally untested, so please check it carefully;
there surely are better solution techniques than transforming an LCP into a QP.
Take a look at the scikit OpenOpt. It has an example of doing quadratic programming and I believe that it goes beyond SciPy's optimization routines. NumPy is required to use OpenOpt. I believe the Wikipedia page that you pointed us to for LCP describes how to solve an LCP via QP.
The best algorithm for solving MCPs (mixed nonlinear complementarity problems, more general than LCPs) is the PATH solver: http://pages.cs.wisc.edu/~ferris/path.html
The PATH solver is available in MATLAB and GAMS, both of which come with a Python API. I have chosen to use GAMS because there is a free version. So here is a step-by-step solution for solving an LCP with the Python API of GAMS. I used Python 3.6:
Download and install GAMS: https://www.gams.com/download/
Install the API to python like here: https://www.gams.com/latest/docs/API_PY_TUTORIAL.html
I used conda, changed the directory to where the apifiles of Python 3.6 were located, and entered
python setup.py install
Create a .gms-file (GAMS file) lcp_for_py.gms containing:
sets i;
alias(i,j);
parameters m(i,i),b(i);
$gdxin lcp_input
$load i m b
$gdxin
positive variables z(i);
equations Linear(i);
Linear(i).. sum(j,m(i,j)*z(j)) + b(i) =g= 0;
model lcp linear complementarity problem/Linear.z/;
options mcp = path;
solve lcp using mcp;
display z.L;
Your Python code then looks like this (note that the /Linear.z/ pairing in the model statement matches equation Linear(i) with variable z(i), which is how GAMS encodes the complementarity):
import gams

# Set the working directory, GamsWorkspace and the database
worDir = "<THE PATH WHERE YOU STORED YOUR .GMS-FILE>"  # e.g. "C:\documents\gams\"
ws = gams.GamsWorkspace(working_directory=worDir)
db = ws.add_database(database_name="lcp_input")

# Set the matrix and the vector of the LCP as lists
matrix = [[1, 1], [2, 1]]
vector = [0, -2]

# Create the GAMS set
index = []
for k in range(0, len(matrix)):
    index.append("i" + str(k + 1))
i = db.add_set("i", 1, "number of decision variables")
for k in index:
    i.add_record(k)

# Create a GAMS parameter named m and add records
m = db.add_parameter_dc("m", [i, i], "matrix of the lcp")
for k in range(0, len(matrix)):
    for l in range(0, len(matrix[0])):
        m.add_record([index[k], index[l]]).value = matrix[k][l]

# Create a GAMS parameter named b and add records
b = db.add_parameter_dc("b", [i], "bias of quadratics")
for k in range(0, len(vector)):
    b.add_record(index[k]).value = vector[k]

# Run the GamsJob using the GAMS file and the database
lcp = ws.add_job_from_file("lcp_for_py.gms")
lcp.run(databases=db)

# Save the solution as a list and print it
z = []
for rec in lcp.out_db["z"]:
    z.append(rec.level)
print(z)
OpenOpt has a free LCP solver written in Python + NumPy; see http://openopt.org/LCP
