Is it normal in scipy.optimize? - python

I want to optimise my portfolio using Markowitz theory (risk minimisation by the Markowitz method for a given income = 15%) and scipy.optimize.minimize.
I have the risk function:
def objective(x):
    x1 = x[0]; x2 = x[1]; x3 = x[2]; x4 = x[3]
    return 1547.87020*x1**2 + 125.26258*x1*x2 + 1194.3433*x1*x3 + 63.6533*x1*x4 \
        + 27.3176649*x2**2 + 163.28848*x2*x3 + 4.829816*x2*x4 \
        + 392.11819*x3**2 + 56.50518*x3*x4 \
        + 34.484063*x4**2
The sum of the stock weights (in %) must equal 1:
def constraint1(x):
    return (x[0] + x[1] + x[2] + x[3] - 1.0)
And the income function with its restriction:
def constraint2(x):
    return (-1.37458*x[0] + 0.92042*x[1] + 5.06189*x[2] + 0.35974*x[3] - 15.0)
And I test it using:
x0 = [0, 1, 1, 0]  # initial value
b = (0.0, 1.0)
bnds = (b, b, b, b)
con1 = {'type': 'ineq', 'fun': constraint1}
con2 = {'type': 'eq', 'fun': constraint2}
cons = [con1, con2]
sol = minimize(objective, x0, method='SLSQP',
               bounds=bnds, constraints=cons)
And my result is:
fun: 678.5433939
jac: array([1383.25920868, 222.75363159, 1004.03005219, 130.30312347])
message: 'Positive directional derivative for linesearch'
nfev: 216
nit: 20
njev: 16
status: 8
success: False
x: array([0., 1., 1., 1.])
But how? The sum of the parts of the portfolio can't be more than 1 (here stock 2 = stock 3 = stock 4 = 100%). That's constraint1. Where is the problem?

The output says "success: False"
So it is telling you that it failed to find a solution to the problem.
Also, why did you put
con1 = {'type': 'ineq', 'fun': constraint1}
Don't you want
con1 = {'type': 'eq', 'fun': constraint1}
I got success using method='BFGS' (note, though, that BFGS ignores bounds and constraints, so a "success" there does not mean the constraints are satisfied).
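For reference, a minimal sketch of the corrected setup with the budget constraint as an equality (still SLSQP, starting from a point whose weights sum to 1; whether constraint2 can be met at all is a separate issue, as the next answer notes):
con1 = {'type': 'eq', 'fun': constraint1}   # sum of weights == 1
con2 = {'type': 'eq', 'fun': constraint2}
x0 = [0.25, 0.25, 0.25, 0.25]               # feasible start
sol = minimize(objective, x0, method='SLSQP',
               bounds=bnds, constraints=[con1, con2])
print(sol.success, sol.x)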

Your code is returning values that do not respect your constraint because the first constraint is defined the wrong way round: an 'ineq' constraint enforces fun(x) >= 0, so a - b >= 0 means a >= b (the order in an inequality matters), and as written constraint1 only requires the weights to sum to at least 1 rather than at most 1. On top of that, your x0 must also respect your constraints, and sum([0, 1, 1, 0]) = 2 > 1.
I slightly improved your code and fixed the aforementioned issues, but I still think that you need to review your second constraint:
import numpy as np
from scipy.optimize import minimize

def objective(x):
    x1, x2, x3, x4 = x[0], x[1], x[2], x[3]
    coefficients = np.array([1547.87020, 125.26258, 1194.3433, 63.6533, 27.3176649, 163.28848, 4.829816, 392.11819, 56.50518, 34.484063])
    xs = np.array([x1**2, x1*x2, x1*x3, x1*x4, x2**2, x2*x3, x2*x4, x3**2, x3*x4, x4**2])
    return np.dot(xs, coefficients)

const1 = lambda x: 1 - sum(x)
const2 = lambda x: np.dot(np.array([-1.37458, 0.92042, 5.06189, 0.35974]), x) - 15.0

x0 = [0, 0, 0, 0]  # initial value
b = (0.0, 1.0)
bnds = (b, b, b, b)
cons = [{'type': 'ineq', 'fun': const1}, {'type': 'eq', 'fun': const2}]

# minimize
sol = minimize(objective,
               x0,
               method='SLSQP',
               bounds=bnds,
               constraints=cons)
print(sol)
output:
fun: 392.1181900000138
jac: array([1194.34332275, 163.28847885, 784.23638535, 56.50518036])
message: 'Positive directional derivative for linesearch'
nfev: 92
nit: 11
njev: 7
status: 8
success: False
x: array([0.00000000e+00, 5.56638069e-14, 1.00000000e+00, 8.29371293e-14])
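A quick way to see why the solver still reports success: False here (a sketch using the data above): with every weight in [0, 1] and the weights summing to at most 1, the linear income term can never exceed its largest coefficient, so the equality const2(x) = 0 is infeasible as written.
# Upper bound of the achievable 'income' over the feasible set:
returns = np.array([-1.37458, 0.92042, 5.06189, 0.35974])
print(returns.max())  # 5.06189 < 15.0, so const2 can never reach zero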


Lagrange multipliers numerical approach

I have the following function that I'm trying to optimize, subject to a constraint:
def damage(a, e, cr, cd):
    return 100*(1+a)*(1+e)*(1+cr*cd)

def constraint(a, e, cr, cd):
    return (100/0.466)*(a+e) + (100/0.622)*(2*cr+cd)
When solving for the Lagrangian by hand I get this output:
import numpy as np
import sympy as smp
a, e, c, d, l = smp.symbols('a e c d l')
eq1 = smp.Eq(1/(1+a), (100/46.6)*l)
eq2 = smp.Eq(1/(1+e), (100/46.6)*l)
eq3 = smp.Eq(d/(1+c*d), (100/62.2)*2*l)
eq4 = smp.Eq(c/(1+c*d), (100/62.2)*l)
eq5 = smp.Eq((100/46.6)*(a+e)+(100/62.2)*(2*c+d) - 300, 0)
solution = np.array(smp.solve([eq1, eq2, eq3, eq4, eq5], [a, e, c, d, l]))
print(solution[0]/100)
print('Constraint', '{:,.0f}'.format(constraint(*(solution/100)[0][:-1])))
print('Max damage', '{:,.0f}'.format(float(round(damage(*(solution/100)[0][:-1])))))
[0.344658405015485 0.344658405015485 0.236481193219279 0.472962386438559 0.000131394038153319]
Constraint 300
Max damage 201
To solve this numerically, I reformulated the problem by stating the constraints individually (splitting the primary constraint into smaller ones). I expressly stated the required relationships between the variables and constrained only one of them, which then determines the state of all the others.
# We first convert this into a minimization problem.
from scipy import optimize

def damage_min(x):
    return -100*(1+x[0])*(1+x[1])*(1+x[2]*x[3])

# next we define the constraints (all equal to 0)
def constraints(x):
    c1 = x[0] - x[1]
    c2 = 2*x[2] - x[3]
    c3 = x[0]/x[3] - 0.466/0.622
    c4 = x[3] - 0.466
    return np.array([c1, c2, c3, c4])

cons = ({'type': 'eq',
         'fun': constraints})

# We solve the minimization problem
x_initial = np.array([34.4658405015485, 34.4658405015485, 23.6481193219279, 47.2962386438559])
solution = optimize.minimize(damage_min, x_initial, constraints=cons)
print(solution.x)
print('Constraint', '{:,.0f}'.format(constraint(*(solution.x))))
print('Max damage', '{:,.0f}'.format(float(round(damage(*(solution.x))))))
[0.3491254 0.3491254 0.233 0.466 ]
Constraint 300
Max damage 202
My question is as follows. How can I recreate the optimal results above by numerically optimizing a single function, e.g. the Lagrangian? When I try to put both functions into a single one, I get this output.
const = 300

def lagrangian(a, e, cr, cd, lam):
    return -damage(a, e, cr, cd) + lam*(round(constraint(a, e, cr, cd)) - const)

def vector_lagrangian(x):
    return lagrangian(x[0], x[1], x[2], x[3], x[4])

x_initial = np.array([32.4658405015485, 34.4658405015485, 23.6481193219279, 47.2962386438559, 1])
solution = optimize.minimize(vector_lagrangian, x_initial)
fun: -2.140132414183526e+37
hess_inv: array([[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 0, 1]])
jac: array([0., 0., 0., 0., 0.])
message: 'Optimization terminated successfully.'
nfev: 119
nit: 1
njev: 17
status: 0
success: True
x: array([ 6.90178344e+08, 6.51257507e+08, 9.75839219e+08, 4.87919645e+08,
-5.08835272e+06])
'constraint': '680,080,111,963'
The constraint, in this case, isn't being satisfied, and the solver converges on a local minimum. Why is this the case? Is the problem caused by the solver, by the specific function being optimized, or is there some other reason?
As already mentioned in the comments, your math is wrong since minimizing the Lagrangian doesn't yield a local minimum of the corresponding optimization problem. Assuming f : R^n -> R and g : R^n -> R^m are both differentiable functions and you want to solve the optimization problem
min f(x) s.t. g(x) = 0
then the first-order necessary optimality condition (FOC) is
∇L(x, λ) = ∇f(x) + ∇g(x)^T * λ = 0
g(x) = 0
where L is the Lagrangian, ∇f the objective gradient and ∇g the Jacobian of g. Consequently, you need to find a root of the function H(x, λ) = (∇f(x) + ∇g(x)^T λ, g(x))^T to solve the FOC, which can be done by means of scipy.optimize.root:
import numpy as np
from scipy.optimize import root
from scipy.optimize._numdiff import approx_derivative

def damage_min(x):
    return -100*(1+x[0])*(1+x[1])*(1+x[2]*x[3])

def constraints(x):
    c1 = x[0] - x[1]
    c2 = 2*x[2] - x[3]
    c3 = x[0]/x[3] - 0.466/0.622
    c4 = x[3] - 0.466
    return np.array([c1, c2, c3, c4])

def f_grad(x):
    return approx_derivative(damage_min, x)

def g_jac(x):
    return approx_derivative(constraints, x)

def H(z, f_grad, g, g_jac):
    # constraints only reads the first four entries of its argument,
    # so passing the full z = (x, λ) here is safe
    g_evaluated = g(z)
    x, λ = np.split(z, (-g_evaluated.size, ))
    eq1 = f_grad(x) + g_jac(x).T @ λ
    eq2 = g_evaluated
    return np.array([*eq1, *eq2])

# res.x contains the solution
res = root(lambda z: H(z, f_grad, constraints, g_jac), x0=np.ones(8))
which yields the solution (consisting of x and the Lagrange multipliers λ):
array([ 3.49125402e-01, 3.49125402e-01, 2.33000000e-01, 4.66000000e-01,
-1.49561074e+02, 4.24092469e+01, 1.39390921e+02, 3.08919653e+02])
A few notes:
In general, it's highly recommended to provide exact derivatives instead of approximating them by finite differences by means of approx_derivative.
If you really want to solve a minimization problem, you can solve the FOC by minimizing the Euclidean norm of the function H. This is exactly what the root method does under the hood.
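On the first note, for this particular problem the exact derivatives are easy to write down by hand (a sketch, derived from damage_min and constraints above; these can replace the approx_derivative wrappers):
def f_grad_exact(x):
    # gradient of damage_min(x) = -100*(1+x0)*(1+x1)*(1+x2*x3)
    return np.array([
        -100 * (1 + x[1]) * (1 + x[2] * x[3]),
        -100 * (1 + x[0]) * (1 + x[2] * x[3]),
        -100 * (1 + x[0]) * (1 + x[1]) * x[3],
        -100 * (1 + x[0]) * (1 + x[1]) * x[2],
    ])

def g_jac_exact(x):
    # Jacobian of constraints(x); only c3 depends on x non-linearly
    return np.array([
        [1.0, -1.0, 0.0, 0.0],                      # c1 = x0 - x1
        [0.0, 0.0, 2.0, -1.0],                      # c2 = 2*x2 - x3
        [1.0/x[3], 0.0, 0.0, -x[0]/x[3]**2],        # c3 = x0/x3 - const
        [0.0, 0.0, 0.0, 1.0],                       # c4 = x3 - 0.466
    ])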

scipy.optimize.minimize does not converge in multivariable optimization

I want to find the values of board_trim and lm that will give me the lowest (closest to 0) value for Board_Moments.
For this I use scipy.optimize.minimize, but it does not converge. I really can't figure it out.
with parameters:
displacement = 70
b = 6.5
deadrise = 20
LCG = 10
Vs_ms = 23.15  # m/s
rho = 1025
mu = 1.19e-6
def Board_Moments(params):
    board_trim, lm = params
    displacement_N = displacement * 9.81  # kN
    lp = Lp(Vs_ms, b, lm)
    # cos and d2r (degrees -> radians) are helpers not shown in the question
    N = displacement_N * cos(d2r(board_trim))  # drag forces perpendicular to the keel
    # Taking moments about transom at height of CG
    deltaM = (displacement_N * LCG) - (N * lp)  # equilibrium condition
    return deltaM
where lp:
def Lp(Vs_ms, b, lm):
    cv = Cv(Vs_ms, b)
    Lambda = Lambda_(lm, b)
    Cp = 0.75 - (1 / (5.21 * (cv / Lambda)**2 + 2.39))
    lp = Cp * lm
    return lp
and
def Cv(Vs_ms, b):
    cv = Vs_ms / (9.81 * b)**0.5
    return cv
and
def Lambda_(lm, b):
    lambda_ = lm / b
    return lambda_
the optimization is done with:
board_trim = 2  # initial estimate
lm = 17.754     # initial estimate
x0 = [board_trim, lm]
Deltam = minimize(Board_Moments, x0, method='Nelder-Mead')
print(Deltam)
The error I get:
final_simplex: (array([[ 1.36119237e+01, 3.45635965e+23],
[-1.36046725e+01, 3.08439110e+23],
[ 2.07268577e+01, 2.59841956e+23]]), array([-7.64916992e+25,
-6.82618616e+25, -5.53373709e+25]))
fun: -7.649169916342451e+25
message: 'Maximum number of function evaluations has been exceeded.'
nfev: 401
nit: 220
status: 1
success: False
x: array([1.36119237e+01, 3.45635965e+23])
Any help would be much appreciated, thanks
You mention
that will give me the lowest (closest to 0) value for Board_Moments.
But minimize searches for the absolute minimum. If you print the intermediate values of deltaM (which you should have done to debug your problem), you'll find they just get smaller and smaller, below zero: -10, -100, -500 and so on.
To get as close to zero as possible, the solution is simply: return the absolute value of deltaM from Board_Moments:
def Board_Moments(params):
    # code as before ...
    deltaM = (displacement_N * LCG) - (N * lp)  # equilibrium condition
    # This print function would have shown the problem immediately
    #print(deltaM)
    # Use absolute (the built-in `abs` or `np.abs`;
    # doesn't really matter for a single value)
    # to get close to zero
    return np.abs(deltaM)
For this particular case and fix, the result I get is:
final_simplex: (array([[ 2.32386388, 15.3390523 ],
[ 2.32394414, 15.33905343],
[ 2.32390145, 15.33905283]]), array([5.33445927e-08, 7.27723091e-08, 1.09428584e-07]))
fun: 5.334459274308756e-08
message: 'Optimization terminated successfully.'
nfev: 107
nit: 59
status: 0
success: True
x: array([ 2.32386388, 15.3390523 ])
(And if you uncomment the print call, you'll see it easily converge towards zero.)
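A variant worth noting (a sketch reusing the same helper functions and parameters as above): squaring deltaM instead of taking the absolute value also drives it towards zero, but keeps the objective smooth at the root, which gradient-based methods such as BFGS or SLSQP tend to prefer over the kink that abs introduces.
def Board_Moments_sq(params):
    board_trim, lm = params
    displacement_N = displacement * 9.81
    lp = Lp(Vs_ms, b, lm)
    N = displacement_N * cos(d2r(board_trim))
    deltaM = (displacement_N * LCG) - (N * lp)
    return deltaM ** 2  # smooth at the root, unlike np.abs(deltaM)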

Optimize a function that has many outputs with python scipy.optimize.minimize

I've searched a little on the web and did not find any solution to this question, so I decided to formally ask here.
I have a function that has multiple outputs:
f(input1, input2, ..., inputn) = output1, output2, output3
My wish is to minimize output1 constraining other outputs, for instance:
output2 >= 0 and output3 <= 7 (*constraints*)
My idea was to have the following program structure:
def function(input1, input2, ..., inputn):
    # very secret part of code
    return (output1, output2, output3)

inputs = [1, 31, 22]
_res = optimize.minimize(function, inputs, method='nelder-mead',
                         options=options, *constraints*)
Now I believe that the answer should be in the documentation of the Scipy minimize function.
The fact is that, from what I've understood, I can only constrain my function inputs and not some of the outputs of my function. Am I missing something?
Here is an example of optimization with constraint:
from scipy.optimize import minimize

def f(xy):
    x, y = xy
    return x**2 + y**2

def constraint1(xy):
    x, y = xy
    return x - 1

def constraint2(xy):
    x, y = xy
    return y - 2

constraints = ({'type': 'ineq', 'fun': constraint1},
               {'type': 'ineq', 'fun': constraint2})

xy_start = [0, 0]
res = minimize(f, xy_start, method='SLSQP', constraints=constraints)
which gives:
fun: 4.999999999999997
jac: array([2., 4.])
message: 'Optimization terminated successfully.'
nfev: 13
nit: 3
njev: 3
status: 0
success: True
x: array([1., 2.])
The idea is that the function you want to minimize must indeed be scalar: the returned value has to be only output1. Additional functions have to be added in order to define each constraint: constraint1(inputs) -> output2, constraint2(inputs) -> output3, etc.
Maybe you could separate the 'very secret part' of your code into its own function, which would be used both by the objective function and by the constraint functions.
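A minimal sketch of that structure (the names and the SLSQP choice are illustrative; note that Nelder-Mead ignores constraints, which is why a constraint-aware method is used here):
from scipy.optimize import minimize

def compute_all(inputs):
    # placeholder for the question's 'very secret part of code';
    # must return (output1, output2, output3)
    ...

def objective(inputs):
    return compute_all(inputs)[0]  # scalar value to minimize

# output2 >= 0 and output3 <= 7, both written in the g(x) >= 0 form
constraints = (
    {'type': 'ineq', 'fun': lambda inputs: compute_all(inputs)[1]},
    {'type': 'ineq', 'fun': lambda inputs: 7 - compute_all(inputs)[2]},
)

res = minimize(objective, x0=[1, 31, 22], method='SLSQP',
               constraints=constraints)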

Python's scipy.optimize.minimize with SLSQP fails with "Positive directional derivative for linesearch"

I have a least squares minimization problem subject to inequality constraints which I am trying to solve using scipy.optimize.minimize. It seems that there are two options for inequality constraints: COBYLA and SLSQP.
I first tried SLSQP since it allows for explicit partial derivatives of the function to be minimized. Depending on the scaling of the problem, it fails with the error:
Positive directional derivative for linesearch (Exit mode 8)
whenever interval or more general inequality constraints are imposed.
This has been observed previously e.g., here. Manual scaling of the function to be minimized (along with the associated partial derivatives) seems to get rid of the problem, but I cannot achieve the same effect by changing ftol in the options.
Overall, this whole thing is causing me to have doubts about the routine working in a robust manner. Here's a simplified example:
import numpy as np
import scipy.optimize as sp_optimize

def cost(x, A, y):
    e = y - A.dot(x)
    rss = np.sum(e ** 2)
    return rss

def cost_deriv(x, A, y):
    e = y - A.dot(x)
    deriv0 = -2 * e.dot(A[:, 0])
    deriv1 = -2 * e.dot(A[:, 1])
    deriv = np.array([deriv0, deriv1])
    return deriv

A = np.ones((10, 2)); A[:, 0] = np.linspace(-5, 5, 10)
x_true = np.array([2, 2/20])
y = A.dot(x_true)
x_guess = x_true / 2
prm_bounds = ((0, 3), (0, 1))

cons_SLSQP = ({'type': 'ineq', 'fun': lambda x: np.array([x[0] - x[1]]),
               'jac': lambda x: np.array([1.0, -1.0])})

# works correctly
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv, bounds=prm_bounds, method='SLSQP', constraints=cons_SLSQP, options={'disp': True})
print(min_res_SLSQP)

# fails
A = 100 * A
y = A.dot(x_true)
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv, bounds=prm_bounds, method='SLSQP', constraints=cons_SLSQP, options={'disp': True})
print(min_res_SLSQP)

# works if bounds and inequality constraints removed
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(A, y), jac=cost_deriv,
                                     method='SLSQP', options={'disp': True})
print(min_res_SLSQP)
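For reference, the manual scaling mentioned above can be done without touching cost itself (a sketch; the factor 1e4 is an arbitrary choice that brings the cost back towards O(1) for the rescaled problem, and dividing the cost and its gradient by the same constant leaves the minimizer unchanged):
scale = 1e4
min_res_scaled = sp_optimize.minimize(
    lambda x, A, y: cost(x, A, y) / scale,
    x_guess, args=(A, y),
    jac=lambda x, A, y: cost_deriv(x, A, y) / scale,
    bounds=prm_bounds, method='SLSQP', constraints=cons_SLSQP,
    options={'disp': True})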
How should ftol be set to avoid failure? More generally, can a similar problem arise with COBYLA? Is COBYLA a better choice for this type of inequality constrained least squares optimization problem?
Using a square root in the cost function was found to improve performance. However, for a non-linear re-parametrization of the problem (simpler but closer to what I need to do in practice), it fails again. Here are the details:
import numpy as np
import scipy.optimize as sp_optimize

def cost(x, y, g):
    e = ((y - x[1]) / x[0]) - g
    rss = np.sqrt(np.sum(e ** 2))
    return rss

def cost_deriv(x, y, g):
    e = ((y - x[1]) / x[0]) - g
    factor = 0.5 / np.sqrt(e.dot(e))
    deriv0 = -2 * factor * e.dot(y - x[1]) / (x[0]**2)
    deriv1 = -2 * factor * np.sum(e) / x[0]
    deriv = np.array([deriv0, deriv1])
    return deriv

x_true = np.array([1/300, .1])
N = 20
t = 20 * np.arange(N)
g = 100 * np.cos(2 * np.pi * 1e-3 * (t - t[-1] / 2))
y = g * x_true[0] + x_true[1]
x_guess = x_true / 2
prm_bounds = ((1e-4, 1e-2), (0, .4))

# check derivatives
delta = 1e-9
C0 = cost(x_guess, y, g)
C1 = cost(x_guess + np.array([delta, 0]), y, g)
approx_deriv0 = (C1 - C0) / delta
C1 = cost(x_guess + np.array([0, delta]), y, g)
approx_deriv1 = (C1 - C0) / delta
approx_deriv = np.array([approx_deriv0, approx_deriv1])
deriv = cost_deriv(x_guess, y, g)

# fails
min_res_SLSQP = sp_optimize.minimize(cost, x_guess, args=(y, g), jac=cost_deriv,
                                     bounds=prm_bounds, method='SLSQP', options={'disp': True})
print(min_res_SLSQP)
Instead of minimizing np.sum(e ** 2), minimize sqrt(np.sum(e ** 2)), or better (in terms of calculation): np.linalg.norm(e)!
This modification:
does not change your solution in regards to x
will need post-processing if the original objective is needed (probably not)
is much more robust
With this change, all cases work, even using numerical differentiation (I was too lazy to modify the gradient, which needs to reflect this!).
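For completeness, the gradient that reflects this change is a one-liner for the first, linear parametrization (a sketch): with cost(x) = ||y - A.dot(x)||, the chain rule through the square root gives -A.T.dot(e) / ||e||.
def cost_norm(x, A, y):
    return np.linalg.norm(y - A.dot(x))

def cost_norm_deriv(x, A, y):
    e = y - A.dot(x)
    # undefined at e = 0; fine away from a perfect fit
    return -A.T.dot(e) / np.linalg.norm(e)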
Example output (the number of function evaluations gives away the numerical differentiation):
Optimization terminated successfully. (Exit mode 0)
Current function value: 3.815547437029837e-06
Iterations: 16
Function evaluations: 88
Gradient evaluations: 16
fun: 3.815547437029837e-06
jac: array([-6.09663382, -2.48862544])
message: 'Optimization terminated successfully.'
nfev: 88
nit: 16
njev: 16
status: 0
success: True
x: array([ 2.00000037, 0.10000018])
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.0002354577991007501
Iterations: 23
Function evaluations: 114
Gradient evaluations: 23
fun: 0.0002354577991007501
jac: array([ 435.97259208, 288.7483819 ])
message: 'Optimization terminated successfully.'
nfev: 114
nit: 23
njev: 23
status: 0
success: True
x: array([ 1.99999977, 0.10000014])
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.0003392807206384532
Iterations: 21
Function evaluations: 112
Gradient evaluations: 21
fun: 0.0003392807206384532
jac: array([ 996.57340243, 51.19298764])
message: 'Optimization terminated successfully.'
nfev: 112
nit: 21
njev: 21
status: 0
success: True
x: array([ 2.00000008, 0.10000104])
While there are probably some problems with SLSQP, it's still one of the most tested and robust codes given its broad application spectrum!
I would also expect SLSQP to do much better here than COBYLA, as the latter is based heavily on linearizations. (But take that as a guess; it's easy to try given the minimize interface!)
Alternative
In general, an interior-point based solver for convex quadratic programming will be the best approach here. But for this, you need to leave scipy. (Or maybe an SOCP solver would be better; I'm not sure.)
cvxpy brings a nice modelling system and a good open-source solver (ECOS; although technically a conic solver, hence more general and less robust, it should beat SLSQP).
Using cvxpy and ECOS, this looks like:
import numpy as np
import cvxpy as cvx

""" Problem data """
A = np.ones((10, 2)); A[:, 0] = np.linspace(-5, 5, 10)
x_true = np.array([2, 2/20])
y = A.dot(x_true)
x_guess = x_true / 2
prm_bounds = ((0, 3), (0, 1))

# problematic case
A = 100 * A
y = A.dot(x_true)

""" Solve """
x = cvx.Variable(len(x_true))
constraints = [x[0] >= x[1]]
for ind, (lb, ub) in enumerate(prm_bounds):  # inefficient -> a matrix-based expression is better!
    constraints.append(x[ind] >= lb)
    constraints.append(x[ind] <= ub)
objective = cvx.Minimize(cvx.norm(A*x - y))
problem = cvx.Problem(objective, constraints)
problem.solve(solver=cvx.ECOS, verbose=False)
print(problem.status)
print(problem.value)
print(x.value.T)
# optimal
# -6.67593652593801e-10
# [[ 2. 0.1]]

Minimization in Python to find shortest path between two points

I'm trying to find the shortest path between two points, (0,0) and (1000,-100). The path is to be defined by a 7th order polynomial function:
p(x) = a0 + a1*x + a2*x^2 + ... + a7*x^7
To do so I tried to minimize the function that calculates the total path length from the polynomial function:
length = integral from 0 to 1000 of sqrt(1 + (dp(x)/dx)^2) dx
Obviously the correct solution is a straight line; however, later on I want to add constraints to the problem. This was supposed to be a first approach.
The code I implemented was:
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate
import scipy.optimize

def path_tracer(a, x):
    return a[0] + a[1]*x + a[2]*x**2 + a[3]*x**3 + a[4]*x**4 + a[5]*x**5 + a[6]*x**6 + a[7]*x**7

def lof(a):
    upper_lim = a[8]
    L = lambda x: np.sqrt(1 + (a[1] + 2*a[2]*x + 3*a[3]*x**2 + 4*a[4]*x**3 + 5*a[5]*x**4 + 6*a[6]*x**5 + 7*a[7]*x**6)**2)
    length_of_path = scipy.integrate.quad(L, 0, upper_lim)
    return length_of_path[0]

a = np.array([-4E-11, -.4146, .0003, -7e-8, 0, 0, 0, 0, 1000])  # [polynomial parameters, x end point]
xx = np.linspace(0, 1200, 1200)
y = [path_tracer(a, x) for x in xx]
# note: this lambda ignores its argument and always evaluates the initial a
cons = ({'type': 'eq', 'fun': lambda x: path_tracer(a, a[8]) + 50})
c = scipy.optimize.minimize(lof, a, constraints=cons)
print(c)
When I run it, however, the minimization routine fails and returns the initial parameters unchanged. The output is:
fun: 1022.9651540965604
jac: array([ 0.00000000e+00, -1.78130722e+02, -1.17327499e+05,
-7.62458172e+07, 9.42803815e+11, 9.99924786e+14,
9.99999921e+17, 1.00000000e+21, 1.00029755e+00])
message: 'Singular matrix C in LSQ subproblem'
nfev: 11
nit: 1
njev: 1
status: 6
success: False
x: array([ -4.00000000e-11, -4.14600000e-01, 3.00000000e-04,
-7.00000000e-08, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 1.00000000e+03])
Am I doing something wrong, or is the routine just not appropriate for this kind of problem? If so, is there an alternative in Python?
You can use this routine, but there are some problems with your approach:
The domain of the polynomial should be normalized to something reasonable, like [0, 1]. This makes the optimization much easier. You can revert this after you are done with the optimization
You could simplify the code by using polyval and related functions
The optimal solution to this is quite obviously -0.1 x, so I'm not sure why you feel the need to optimize.
A solution that works is
import numpy as np
import scipy.optimize
x = np.linspace(0, 1, 1000)
def obj_fun(p):
deriv = np.polyval(np.polyder(p), x)
return np.sum(np.sqrt(1 + deriv ** 2))
cons = ({'type': 'eq', 'fun': lambda p: np.polyval(p, [0, 1]) - [0, -100]})
p0 = np.zeros(8)
c = scipy.optimize.minimize(obj_fun, p0, constraints = cons)
We can then plot the result:
import matplotlib.pyplot as plt
plt.plot(np.polyval(c.x, x), label='result')
plt.plot(-100 * x, label='optimal')
plt.legend()
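To revert the normalization from the first point (a sketch): the solution q(u) was fitted on u = x/1000 in [0, 1], so p(x) = q(x/1000), i.e. each coefficient gets divided by 1000 raised to its degree (np.polyval stores the highest degree first).
degrees = np.arange(len(c.x))[::-1]       # 7, 6, ..., 0 in np.polyval order
p_coeffs = c.x / (1000.0 ** degrees)
x_full = np.linspace(0, 1000, 1000)
plt.plot(x_full, np.polyval(p_coeffs, x_full), label='on [0, 1000]')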
