I am in the process of translating some MATLAB code into Python. There is one line that is giving me a bit of trouble:
[q,f_dummy,exitflag, output] = quadprog(H,f,-A,zeros(p*N,1),E,qm,[],[],q0,options);
I looked up the documentation in MATLAB to find that the quadprog function is used for optimization (particularly minimization).
I attempted to find a similar function in Python (using numpy) and there does not seem to be any.
Is there a better way to translate this line of code into Python? Or are there other packages that can be used? Do I need to make a new function that accomplishes the same task?
Thanks for your time and help!
There is a library called CVXOPT that has quadratic programming in it.
import numpy
import quadprog

def quadprog_solve_qp(P, q, G=None, h=None, A=None, b=None):
    qp_G = .5 * (P + P.T)   # make sure P is symmetric
    qp_a = -q
    if A is not None:
        qp_C = -numpy.vstack([A, G]).T
        qp_b = -numpy.hstack([b, h])
        meq = A.shape[0]
    else:  # no equality constraint
        qp_C = -G.T
        qp_b = -h
        meq = 0
    return quadprog.solve_qp(qp_G, qp_a, qp_C, qp_b, meq)[0]
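For reference, here is a hedged sketch (not part of the original answer) of how the MATLAB call in the question would map onto this helper. H, f, A, E and qm stand in for the question's matrices with dummy values, and note that quadprog.solve_qp has no warm start, so the MATLAB q0 argument has no counterpart here:
import numpy as np

# Dummy stand-ins for the question's H, f, A, E, qm (shapes for illustration only)
H = np.array([[4.0, 1.0], [1.0, 2.0]])
f = np.array([1.0, 1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0]])
E = np.array([[1.0, 1.0]])
qm = np.array([1.0])

# MATLAB: quadprog(H, f, -A, zeros(...), E, qm, ...)
# i.e. minimize 0.5*q'Hq + f'q  subject to  -A q <= 0  and  E q = qm
q = quadprog_solve_qp(H, f, G=-A, h=np.zeros(A.shape[0]), A=E, b=qm)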
I will start by mentioning that quadratic programming problems are a subset of convex optimization problems which are a subset of optimization problems.
There are multiple python packages which solve quadratic programming problems, notably
cvxopt -- which solves all kinds of convex optimization problems (including quadratic programming problems). This is a python version of the previous cvx MATLAB package.
quadprog -- this is exclusively for quadratic programming problems but doesn't seem to have much documentation.
scipy.optimize.minimize -- this is a very general minimizer which can solve quadratic programming problems, as well as other optimization problems (convex and non-convex); a minimal sketch of this route is shown below.
You might also benefit from looking at the answers to this stackoverflow post which has more details and references.
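To make the scipy.optimize.minimize route from the list above concrete, here is a minimal sketch (not taken from any package's documentation) that solves a small QP, minimize 0.5*x'Px + q'x subject to G x <= h, with the SLSQP method; P, q, G, h are example data:
import numpy as np
from scipy.optimize import minimize

P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
G = np.array([[1.0, 0.0], [0.0, 1.0]])
h = np.array([0.7, 0.7])

objective = lambda x: 0.5 * x @ P @ x + q @ x
jacobian = lambda x: P @ x + q                                # gradient of the objective (P symmetric)
constraints = [{"type": "ineq", "fun": lambda x: h - G @ x}]  # encodes G x <= h

res = minimize(objective, x0=np.zeros(2), jac=jacobian,
               constraints=constraints, method="SLSQP")
print(res.x)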
Note: The code snippet in user1911226's answer appears to come from this blog post:
https://scaron.info/blog/quadratic-programming-in-python.html
which compares some of these quadratic programming packages. I can't comment on their answer, but although they claim to be presenting the cvxopt solution, the code is actually for the quadprog solution.
OSQP is a specialized free QP solver based on ADMM. I have adapted the OSQP documentation demo and the OSQP call in the qpsolvers repository for your problem.
Note that the matrices H and G are expected to be sparse matrices in CSC format. Here is the script:
import numpy as np
import scipy.sparse as spa
import osqp


def quadprog(P, q, G=None, h=None, A=None, b=None,
             initvals=None, verbose=True):
    l = -np.inf * np.ones(len(h))
    if A is not None:
        qp_A = spa.vstack([G, A]).tocsc()
        qp_l = np.hstack([l, b])
        qp_u = np.hstack([h, b])
    else:  # no equality constraint
        qp_A = G
        qp_l = l
        qp_u = h
    model = osqp.OSQP()
    model.setup(P=P, q=q, A=qp_A, l=qp_l, u=qp_u, verbose=verbose)
    if initvals is not None:
        model.warm_start(x=initvals)
    results = model.solve()
    return results.x, results.info.status


# Generate problem data
n = 2  # Variables
H = spa.csc_matrix([[4, 1], [1, 2]])
f = np.array([1, 1])
G = spa.csc_matrix([[1, 0], [0, 1]])
h = np.array([0.7, 0.7])
A = spa.csc_matrix([[1, 1]])
b = np.array([1.])

# Initial point
q0 = np.ones(n)

x, status = quadprog(H, f, G, h, A, b, initvals=q0, verbose=True)
You could use the solve_qp function from qpsolvers. It can be installed, along with a starter kit of open source solvers, by pip install qpsolvers[open_source_solvers]. Then you can replace your line with:
from qpsolvers import solve_qp
solver = "proxqp" # or "osqp", "quadprog", "cvxopt", ...
x = solve_qp(H, f, G, h, A, b, initvals=q_0, solver=solver, **options)
There are many QP solvers available in Python, each with its own pros and cons. Make sure you try different values for the solver keyword argument to find the one that fits your problem best.
Here is a standalone example based on your question and the other comments:
import numpy as np
from qpsolvers import solve_qp
H = np.array([[4.0, 1.0], [1.0, 2.0]])
f = np.array([1.0, 1])
G = np.array([[1.0, 0.0], [0.0, 1.0]])
h = np.array([0.7, 0.7])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
q_0 = np.array([1.0, 1.0])
solver = "cvxopt" # or "osqp", "proxqp", "quadprog", ...
options = {"verbose": True}
x = solve_qp(H, f, G, h, A, b, initvals=q_0, solver=solver, **options)
Related
I have a system of ODEs of the form B'(t) = M B(t) + [rho_1, rho_2]^T, where M is a constant 2x2 matrix (the numerical values appear in the code below).
In essence, I have the solutions B_1(t) and B_2(t) at t=5 and I am interested in finding the unknown parameters rho_1 and rho_2. The approach I took was: 1) define the function corresponding to the system above; 2) integrate it using solve_ivp and subtract the result from the true values of B_1(t) and B_2(t); 3) finally, use fsolve to find values of rho_1 and rho_2 such that the difference between the true values of B_1(t) and B_2(t) and the ones obtained with the tuned rho_1 and rho_2 is the zero vector. The code I have implemented for this purpose is the following:
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

t_eval = np.arange(0, 5)

def fun(t, s, rho_1, rho_2):
    return np.dot(np.array([0.775416, 0, 0, 0.308968]).reshape(2, 2), s) + np.array([rho_1, rho_2]).reshape(2, 1)

def fun2(t, rho_1, rho_2):
    res = solve_ivp(fun, [0, 5], y0=[0, 0], t_eval=t_eval, args=(rho_1, rho_2), vectorized=True)
    sol = res.y[:, 4] - np.array([0.01306365, 0.00589119])
    return sol

root = fsolve(fun2, [0, 0])
However, I am not sure whether fsolve is inappropriate for this purpose or whether there is something wrong with my code, as I get the following error:
fun2() missing 2 required positional arguments: 'rho_1' and 'rho_2'
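For what it's worth (this is an assumption on my part, not something stated in the question): fsolve passes a single 1-D array of unknowns to the callback, so the usual fix is to have the residual function accept that array instead of separate rho_1 and rho_2 arguments. A minimal sketch along those lines:
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

t_eval = np.arange(0, 5)
M = np.array([[0.775416, 0.0], [0.0, 0.308968]])

def fun(t, s, rho_1, rho_2):
    return M @ s + np.array([rho_1, rho_2]).reshape(2, 1)

def residual(rho):  # rho is the 1-D array [rho_1, rho_2] supplied by fsolve
    res = solve_ivp(fun, [0, 5], y0=[0, 0], t_eval=t_eval,
                    args=(rho[0], rho[1]), vectorized=True)
    return res.y[:, -1] - np.array([0.01306365, 0.00589119])

root = fsolve(residual, [0, 0])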
I have finally managed to wrap some Fortran into Python using numpy's f2py.
I am going to need to write a small library of subroutines that I can then use for a larger project, but I am already having trouble with the example subroutine I just compiled.
This is a simple algorithm to solve linear systems of equations in Fortran 90:
SUBROUTINE solve_lin_sys(A, c, x, n)
    ! =====================================================
    ! Uses gauss elimination and backwards substitution
    ! to reduce a linear system and solve it.
    ! Problem (0-div) can arise if there are 0s on main diag.
    ! =====================================================
    IMPLICIT NONE

    INTEGER :: i, j, k
    REAL*8 :: fakt, summ

    INTEGER, INTENT(in) :: n
    REAL*8, INTENT(inout) :: A(n,n)
    REAL*8, INTENT(inout) :: c(n)
    REAL*8, INTENT(out) :: x(n)

    DO i = 1, n-1                      ! pick the var to eliminate
        DO j = i+1, n                  ! pick the row where to eliminate
            fakt = A(j,i) / A(i,i)     ! elimination factor
            DO k = 1, n                ! eliminate
                A(j,k) = A(j,k) - A(i,k)*fakt
            END DO
            c(j) = c(j) - c(i)*fakt    ! iterate on known terms
        END DO
    END DO

    ! Actual solving:
    x(n) = c(n) / A(n,n)               ! last variable being solved
    DO i = n-1, 1, -1
        summ = 0.d0
        DO j = i+1, n
            summ = summ + A(i,j)*x(j)
        END DO
        x(i) = (c(i) - summ) / A(i,i)
    END DO

END SUBROUTINE solve_lin_sys
I successfully compiled it into a Python module and I can import it, but I can't figure out what kind of arguments I should pass to it from Python.
This is what I've tried:
import numpy as np
import fortran_operations as ops
sys = [[3, -0.1, -0.2],
[0.1, 7, -0.3],
[0.3, -0.2, 10]]
known = [7.85, -19.3, 71.4]
A = np.array(sys)
c = np.array(known)
x = ops.solve_lin_sys(A, c, 3)
I tried passing both the plain list and the numpy array to the Fortran routine, but in both cases I get the error:
x = ops.solve_lin_sys(A, c, 3)
ValueError: failed in converting 1st argument `a' of fortran_operations.solve_lin_sys to C/Fortran array
I thought you were supposed to simply use numpy arrays. Printing solve_lin_sys.__doc__ also states that I should use a rank-2 array for A and a rank-1 array for c. Isn't that already the shape of the arrays as I defined them?
Sorry if this has a stupid solution, I'm a beginner with all of this.
Thank you in advance
Arrays of dimension 2 and higher must be contiguous, with 'Fortran' ordering of elements.
import numpy as np
import fortran_operations as ops

sys = [[3, -0.1, -0.2],
       [0.1, 7, -0.3],
       [0.3, -0.2, 10]]
known = [7.85, -19.3, 71.4]

A = np.array(sys, dtype='d', order='F')  # double precision, Fortran-ordered
c = np.array(known, dtype='d')
x = ops.solve_lin_sys(A, c, 3)
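If the array already exists in C order, a possible alternative (not part of the original answer) is to copy it into Fortran order with numpy.asfortranarray before the call:
import numpy as np
A = np.asfortranarray(np.array(sys, dtype='d'))  # Fortran-ordered copy of the same data
x = ops.solve_lin_sys(A, c, 3)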
I am trying to solve the following optimization problem:
Objective function: min 1*x1 + 2*x2
Constraints: x1 + 4*x2 >= 50, x1 >= 0, x2 >= 0
This is a linear problem, so we use the linprog() function.
The answer needs to be an integer. How do I set up the constraints so that the answer is not a decimal?
import numpy as np
from scipy import optimize
c = [1, 2]
A = [[-1,-4]]
b = -50
x_bounds = (0, None)
y_bounds = (0, None)
result = optimize.linprog(c, A_ub = A, b_ub = b, bounds=[x_bounds, y_bounds] )
print(result)
It is not possible to do integer programming with scipy.
This problem has already been discussed.
Please check this post: post
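For illustration only (not part of the original answer, and assuming the PuLP package with its bundled CBC solver is installed), a dedicated integer programming library can express the integrality requirement directly:
from pulp import LpProblem, LpMinimize, LpVariable, value

prob = LpProblem("integer_example", LpMinimize)
x1 = LpVariable("x1", lowBound=0, cat="Integer")
x2 = LpVariable("x2", lowBound=0, cat="Integer")

prob += x1 + 2 * x2          # objective: minimize x1 + 2*x2
prob += x1 + 4 * x2 >= 50    # constraint

prob.solve()
print(value(x1), value(x2), value(prob.objective))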
As far as I can tell, my code is written correctly, but it is not giving me the correct solution (plot). When I solved the same system of ODEs in Mathematica, I got the correct solution, and the two solutions are totally different. I am writing a research project, so I need proper code in Python. Could you please point out the mistake in my code?
Python solution (plot image)
Mathematica solution (plot image)
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as si

## Three-equation system
def func(state, T):
    H = state[0]
    P = state[1]
    R = state[2]
    Hd = -(16./3.)*np.pi*P
    Pd = -4.*H*P
    Rd = H*R
    return Hd, Pd, Rd

T = np.linspace(0.1, 0.9, 50)
state0 = [1, 0.0001, 0.1]
s = si.odeint(func, state0, T)
h = np.transpose(s)

plt.plot(T, h[0])
plt.show()
Mathematica code
Clear[H,\[Rho],a]
Eq1=(H'[t] == -16 \[Pi] \[Rho][t]/3)
Eq2= (\[Rho]'[t] == -4 H[t] \[Rho][t])
Eq3 = (a'[t] == H[t] a[t])
sol=NDSolve[{Eq1,Eq2, Eq3,
H[0.1]==0.1, \[Rho][0.1]==0.1, a[0.1]==0.1},
{H[t],\[Rho][t],a[t]}, {t,0.1, 0.9}]
Plot[Evaluate[{H[t]}/.sol],{t,0.1,0.9}]
Both codes are correct. I just turned my laptop off and on again, and now it gives me the correct result (the same as Mathematica).
Is there a good library to numerically solve an LCP in Python?
Edit: I need a working Python code example, because most libraries seem to only solve quadratic problems and I have trouble converting an LCP into a QP.
For quadratic programming with Python, I use the qp-solver from cvxopt (source). Using this, it is straightforward to translate the LCP problem into a QP problem (see Wikipedia). Example:
from cvxopt import matrix, spmatrix
from cvxopt.blas import gemv
from cvxopt.solvers import qp


def append_matrix_at_bottom(A, B):
    l = []
    for x in range(A.size[1]):
        for i in range(A.size[0]):
            l.append(A[i + x * A.size[0]])
        for i in range(B.size[0]):
            l.append(B[i + x * B.size[0]])
    return matrix(l, (A.size[0] + B.size[0], A.size[1]))


M = matrix([[ 4.0, 6,   -4,    1.0],
            [ 6,   1,    1.0,  2.0],
            [-4,   1.0,  2.5, -2.0],
            [ 1.0, 2.0, -2.0,  1.0]])
q = matrix([12, -10, -7.0, 3])

I = spmatrix(1.0, range(M.size[0]), range(M.size[1]))
G = append_matrix_at_bottom(-M, -I)   # inequality constraint G z <= h
h = matrix([x for x in q] + [0.0 for _x in range(M.size[0])])

sol = qp(2.0 * M, q, G, h)            # find z, w, so that w = M z + q
if sol['status'] == 'optimal':
    z = sol['x']
    w = matrix(q)
    gemv(M, z, w, alpha=1.0, beta=1.0)  # w = M z + q
    print(z)
    print(w)
else:
    print('failed')
Please note:
the code is totally untested, please check carefully;
there surely are better solution techniques than transforming LCP into QP.
Take a look at the scikit OpenOpt. It has an example of doing quadratic programming, and I believe it goes beyond SciPy's optimization routines. NumPy is required to use OpenOpt. I believe the Wikipedia page on LCP that you pointed us to describes how to solve an LCP via QP.
The best algorithm for solving MCPs (mixed nonlinear complementarity problems, more general than LCP) is the PATH solver: http://pages.cs.wisc.edu/~ferris/path.html
The PATH solver is available in MATLAB and GAMS. Both come with a Python API. I chose GAMS because there is a free version. So here is a step-by-step solution for solving an LCP with the Python API of GAMS. I used Python 3.6:
Download and install GAMS: https://www.gams.com/download/
Install the API to python like here: https://www.gams.com/latest/docs/API_PY_TUTORIAL.html
I used conda, changed the directory to where the API files for Python 3.6 were, and entered
python setup.py install
Create a .gms-file (GAMS file) lcp_for_py.gms containing:
sets i;
alias(i,j);
parameters m(i,i),b(i);
$gdxin lcp_input
$load i m b
$gdxin
positive variables z(i);
equations Linear(i);
Linear(i).. sum(j,m(i,j)*z(j)) + b(i) =g= 0;
model lcp linear complementarity problem/Linear.z/;
options mcp = path;
solve lcp using mcp;
display z.L;
Your Python code looks like this:
import gams

# Set the working directory, the GamsWorkspace and the database
worDir = "<THE PATH WHERE YOU STORED YOUR .GMS-FILE>"  # e.g. "C:\documents\gams\"
ws = gams.GamsWorkspace(working_directory=worDir)
db = ws.add_database(database_name="lcp_input")

# Set the matrix and the vector of the LCP as lists
matrix = [[1, 1], [2, 1]]
vector = [0, -2]

# Create the GAMS set
index = []
for k in range(0, len(matrix)):
    index.append("i" + str(k + 1))
i = db.add_set("i", 1, "number of decision variables")
for k in index:
    i.add_record(k)

# Create a GAMS parameter named m and add records
m = db.add_parameter_dc("m", [i, i], "matrix of the lcp")
for k in range(0, len(matrix)):
    for l in range(0, len(matrix[0])):
        m.add_record([index[k], index[l]]).value = matrix[k][l]

# Create a GAMS parameter named b and add records
b = db.add_parameter_dc("b", [i], "bias of quadratics")
for k in range(0, len(vector)):
    b.add_record(index[k]).value = vector[k]

# Run the GamsJob using the GAMS file and the database
lcp = ws.add_job_from_file("lcp_for_py.gms")
lcp.run(databases=db)

# Save the solution as a list and print it
z = []
for rec in lcp.out_db["z"]:
    z.append(rec.level)
print(z)
OpenOpt has a free LCP solver written in Python + NumPy; see http://openopt.org/LCP