Optimization with Python (scipy.optimize) - python

I am trying to maximize the following function using Python's scipy.optimize. However, after lots of trying, it doesn't seem to work. The function and my code are pasted below. Thanks for helping!
Problem
Maximize [sum (x_i / y_i)**gamma]**(1/gamma)
subject to the constraint sum x_i = 1; x_i is in the interval (0,1).
x is a vector of choice variables; y is a vector of parameters; gamma is a parameter. The xs must sum to one. And each x must be in the interval (0,1).
Code
import math
import numpy as np
from scipy.optimize import minimize

def objective_function(x, y):
    sum_contributions = 0
    gamma = 0.2
    for count in range(len(x)):
        sum_contributions += (x[count] / y[count]) ** gamma
    value = math.pow(sum_contributions, 1 / gamma)
    return -value

cons = ({'type': 'eq', 'fun': lambda x: np.array([sum(x) - 1])})
y = [0.5, 0.3, 0.2]
initial_x = [0.2, 0.3, 0.5]
opt = minimize(objective_function, initial_x, args=(y,), method='SLSQP',
               constraints=cons, bounds=[(0, 1)] * len(initial_x))

Sometimes a numerical optimizer just doesn't work, for whatever reason. We can parametrize the problem slightly differently and it will work (and might even work faster).
For example, for bounds of (0,1) we can use a transform function such that values in (-inf, +inf) end up in (0,1) after being transformed.
We can do a similar trick with the equality constraint. For example, we can reduce the dimension from 3 to 2, since the last element of x has to be 1 minus the sum of the others.
If it still won't work, we can switch to an optimizer that does not require derivative information, such as Nelder-Mead.
There is also the Lagrange multiplier approach.
In [111]:
def trans_x(x):
    x1 = x**2 / (1 + x**2)
    z = np.hstack((x1, 1 - sum(x1)))
    return z

def F(x, y, gamma=0.2):
    z = trans_x(x)
    return -(((z / y)**gamma).sum())**(1. / gamma)
In [112]:
opt = minimize(F, np.array([0., 1.]), args=(np.array(y),),
               method='Nelder-Mead')
opt
Out[112]:
status: 0
nfev: 96
success: True
fun: -265.27701747828007
x: array([ 0.6463264, 0.7094782])
message: 'Optimization terminated successfully.'
nit: 52
The result is:
In [113]:
trans_x(opt.x)
Out[113]:
array([ 0.29465097, 0.33482303, 0.37052601])
And we can visualize it with:
In [114]:
import matplotlib.pyplot as plt

x1 = np.linspace(0, 1)
y1 = np.linspace(0, 1)
X, Y = np.meshgrid(x1, y1)
Z = np.array([F(item, y) for item in
              np.vstack((X.ravel(), Y.ravel())).T]).reshape((len(x1), -1), order='F')
Z = np.fliplr(Z)
Z = np.flipud(Z)
plt.contourf(X, Y, Z, 50)
plt.colorbar()
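As a quick sanity check (a small addition, not part of the original answer), one can confirm that the back-transformed solution satisfies the constraints:

z = trans_x(opt.x)
print(z.sum())                        # 1.0 by construction of the transform
print((z > 0).all(), (z < 1).all())   # each component lies in (0, 1)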

Even though this question is a bit dated, I wanted to add an alternative solution which might be useful for others stumbling upon it in the future.
It turns out your problem is solvable analytically. You can start by writing down the Lagrangian of the (equality-constrained) optimization problem (dropping the outer power 1/\gamma, a monotone transformation that does not change the location of the optimum):
L = \sum_i (x_i/y_i)^\gamma - \lambda (\sum_i x_i - 1)
The optimal solution is found by setting the first derivative of this Lagrangian to zero:
0 = \partial L / \partial x_i = \gamma x_i^{\gamma-1} / y_i^\gamma - \lambda
=> x_i \propto y_i^{\gamma/(\gamma - 1)}
Using this insight the optimization problem can be solved simply and efficiently by:
In [4]:
def analytical(y, gamma=0.2):
    y = np.asarray(y)              # accept a plain list as well
    x = y**(gamma / (gamma - 1.0))
    x /= np.sum(x)
    return x

xanalytical = analytical(y)
xanalytical, objective_function(xanalytical, y)
Out [4]:
(array([ 0.29466774, 0.33480719, 0.37052507]), -265.27701765929692)
CT Zhu's solution is elegant, but it might violate the positivity constraint on the third coordinate. For gamma = 0.2 this does not seem to be a problem in practice, but for other gammas you easily run into trouble:
In [5]:
y = [0.2, 0.1, 0.8]
opt = minimize(F, np.array([0., 1.]), args=(np.array(y), 2.0),
method='Nelder-Mead')
trans_x(opt.x), opt.fun
Out [5]:
(array([ 1., 1., -1.]), -11.249999999999998)
For other optimization problems with the same probability-simplex constraints as yours, but for which there is no analytical solution, it might be worth looking into projected gradient methods or similar. These methods leverage the fact that there is a fast algorithm for projecting an arbitrary point onto this set; see https://en.wikipedia.org/wiki/Simplex#Projection_onto_the_standard_simplex.
(To see the complete code and a better rendering of the equations take a look at the Jupyter notebook http://nbviewer.jupyter.org/github/andim/pysnippets/blob/master/optimization-simplex-constraints.ipynb)
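For reference, here is a minimal sketch of that projection onto the probability simplex (following the algorithm on the linked Wikipedia page; the function name is my own):

import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]                    # sort in descending order
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)  # shift that enforces the constraints
    return np.maximum(v + theta, 0.0)

# Example: project an arbitrary point and check the constraints.
x = project_to_simplex(np.array([0.8, 0.6, -0.2]))
print(x, x.sum())   # [0.6 0.4 0. ] 1.0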

Related

Why does my optimization (scipy.optimize.minimize) not work and return the initial values instead?

I have a set of data; each column corresponds to a spectrum at a certain time. I want to fit the spectrum at a generic time (t_i) as a linear combination of the spectrum at time 0 (in the first column), at time 5 (in column 30) and time 35 (in column 210). So the equation I want to fit is:
S(t_i) = a * S(t_0) + b * S(t_5) + c * S(t_35)
where:
0 <= a, b, c <= 1
a + b + c = 1
I found the solution at this question (Minimizing Least Squares with Algebraic Constraints and Bounds) super useful. But when I try it with my set of data the results are obviously wrong. I tried changing the method to 'Nelder-Mead', but it doesn't respect my bounds, so I get negative values.
This is my script:
t0= df.iloc[:,0] #Spectrum at time 0
t5 = df.iloc[:,30] # Spectrum at time 5
t35 = df.iloc[:,120] # Spectrum at time 35
ti= df.iloc[:,20]
# Bounds that make every coefficient be between 0 and 1
bnds = [(0, 1), (0, 1), (0, 1)]
# Constrain the sum of the coefficient to 1
cons = [{"type": "eq", "fun": lambda x: x[0] + x[1] + x[2] - 1}]
xinit = np.array([1, 0, 0])
fun = lambda x: np.sum((ti -(x[0] * t0 + x[1] * t5 + x[2] * t35))**2)
res = minimize(fun, xinit, method='Nelder-Mead', bounds=bnds, constraints=cons)
print(res.x)
If I use the Nelder-Mead method I get: Out: [ 0.02732053 1.01961422 -0.04504698] , if I don't specify the method I get: [1. 0. 0.] (I believe that in this case the SLSQP method is being used).
The data I'm referring to is similar to the following:
0 3.333 5 35.001
0.001045089 0.001109701 0.001169798 0.000725486
0.001083051 0.001138815 0.001176665 0.000713021
0.001090994 0.001142676 0.001186642 0.000716149
0.001096258 0.001156476 0.001190218 0.00071286
Can you identify the problem? Can you suggest other ways to solve this problem? I have also tried using least_squares, but it failed.
The result of a local optimization strongly depends on the initial values.
It might return [1, 0, 0] for the case you stated above simply because there was no possibility for the optimizer to find a "downhill-only" way to [0, 1, 0].
In fact, you might have started in a local minimum and all ways out of the dip went uphill, so the optimizer chose to stay. That's how these optimizers work.
Try
xinit = np.array([0.0, 1.0, 0.0])
for t_i = t5 and I am quite sure the optimizer will return the initial value.
(Note also that Nelder-Mead ignores the constraints argument entirely, and older SciPy versions ignore bounds for that method as well, which is why it gives you negative coefficients.)
For your case, do what I stated here: run the optimizer several times, each time picking random initial values inside your boundaries, as sketched below. You can take the code posted there and just add your constraints; use SLSQP or trust-constr.
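A minimal sketch of that multi-start idea, reusing fun, bnds and cons from the question (the number of restarts and the Dirichlet starting points are my own choices):

import numpy as np
from scipy.optimize import minimize

best = None
rng = np.random.default_rng(0)
for _ in range(20):
    # Random starting point that already satisfies the simplex constraints.
    x0 = rng.dirichlet(np.ones(3))
    res = minimize(fun, x0, method='SLSQP', bounds=bnds, constraints=cons)
    if res.success and (best is None or res.fun < best.fun):
        best = res
print(best.x)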

Inequality constrained least squares with python using 'COBYLA' algorithm

My goal is to minimize the least-squares error (i.e. "fit" the function) subject to the requirement that the fitted function is non-decreasing, which means that its derivative is >= 0 at all points of I.
My function of choice is a 4th-degree polynomial, i.e.
f(x) = a*x**4 + b*x**3 + c*x**2 + d*x + e
For this task it's best to use the scipy.optimize.minimize method. I built a fairly robust iterative algorithm that searches for points where the function returned by 'minimize' is decreasing and sets an inequality constraint there. For example, if f(x) is decreasing at a point x0, my constraint is:
4*a*x0**3 + 3*b*x0**2 + 2*c*x0 + d >= 0
For some of my data I succeeded using the 'SLSQP' optimization method WITH the inequality constraints, as described here. That is odd, because the minimize documentation states:
"Note that COBYLA only supports inequality constraints."
So my first question: 1] Is the tutorial from the first link mistaken?
Even if it is right, it looks like I can't use 'SLSQP' for other data because of an 'incompatible constraints' error during the minimize process.
Now I want to use the 'COBYLA' algorithm, because there could be some points where f(x) is decreasing. Here is the sample code:
#STACK
#[ 1.01766416e-04, 1.80575564e-06, -7.51840485e-03, -7.51828086e-03, 9.84985357e-01]
import numpy as np
from scipy.optimize import minimize

def ecdf(arr):
    # Empirical cumulative distribution function of the sample.
    arr = np.array(arr)
    F = [len(arr[arr <= t]) / len(arr) for t in arr]
    return np.array(F)

def der(args_pol, point):
    # Derivative of the 4th-degree polynomial at a given point.
    a, b, c, d, e = args_pol
    return 4*a*point**3 + 3*b*point**2 + 2*c*point + d

def least_sq(args_pol, x, y):
    # Sum of squared residuals of the polynomial fit.
    a, b, c, d, e = args_pol
    return ((y - (a*x**4 + b*x**3 + c*x**2 + d*x + e))**2).sum()

var = np.array([6.8, 6.9, 7. , 7.4, 7.4, 7.5, 7.5, 7.6, 7.7, 7.8, 8. ,
                8. , 8. ])
ec = ecdf(var)
tip = [0., 0., 0., 0., 0.]
const = []
opt = minimize(least_sq, tip, method='COBYLA', args=(var, ec),
               constraints=const)
The coefficients returned by the optimization are in the second comment at the top of the code; the plot of the resulting function shows that it fits my data VERY poorly, even without any constraints. I saw similar behaviour for data where I needed some constraints as well; sometimes the resulting function was even worse than in this example. So my second question is:
2] Can anybody explain to me what I am doing wrong?
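For what it is worth, here is a minimal sketch of how inequality constraints of the form above could be passed to COBYLA on a fixed grid of points (a simplification of the iterative scheme described in the question; the grid density is an arbitrary choice):

# Enforce der(args_pol, p) >= 0 on a grid covering the data range.
grid = np.linspace(var.min(), var.max(), 50)
const = [{'type': 'ineq', 'fun': lambda args_pol, p=p: der(args_pol, p)} for p in grid]

opt = minimize(least_sq, tip, method='COBYLA', args=(var, ec),
               constraints=const)
print(opt.x)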

How to resolve function approximation task in Python?

Consider the complex mathematical function on the interval [1, 15]:
f(x) = sin(x / 5) * exp(x / 10) + 5 * exp(-x / 2)
A polynomial of degree n (w_0 + w_1 x + w_2 x^2 + ... + w_n x^n) is uniquely defined by any n + 1 different points through which it passes.
This means that its coefficients w_0, ..., w_n can be determined from the following system of linear equations:
w_0 + w_1 x_i + w_2 x_i^2 + ... + w_n x_i^n = f(x_i), for i = 1, ..., n + 1
where x_1, ..., x_n, x_{n+1} are the points through which the polynomial passes, and f(x_1), ..., f(x_n), f(x_{n+1}) are the values it must take at these points.
I'm trying to form a system of linear equations (that is, specify the coefficient matrix A and the right-hand-side vector b) for the polynomial of degree three, which must coincide with the function f at the points 1, 4, 10, and 15, and to solve this system using the scipy.linalg.solve function.
A = numpy.array([[1., 1., 1., 1.], [1., 4., 8., 64.], [1., 10., 100., 1000.], [1., 15., 225., 3375.]])
V = numpy.array([3.25, 1.74, 2.50, 0.63])
numpy.linalg.solve(A, V)
I got the wrong answer. So the question is: is the matrix correct?
No, your matrix is not correct.
The biggest mistake is in the second row of A: the third entry should be 4**2, which is 16, but you have 8. Less importantly, you have only two decimal places in your constants array V, but you really should use more precision than that; systems of linear equations are sometimes very sensitive to the provided values, so make them as precise as possible. Also, the rounding of your final three entries is off: you rounded down, but you should have rounded up. If you really want two decimal places (which I do not recommend) the values should be
V = numpy.array([3.25, 1.75, 2.51, 0.64])
But better would be
V = numpy.array([3.252216865271419, 1.7468459495903677,
2.5054164070002463, 0.6352214195786656])
With those changes to A and V I get the result
array([ 4.36264154, -1.29552587, 0.19333685, -0.00823565])
I get these two sympy plots, the first showing your original function and the second the approximating cubic polynomial.
They look close to me! When I evaluate both at 1, 4, 10, and 15, the largest absolute error is at 15, namely -4.57042132584462e-6. That is somewhat larger than I would have expected, but probably good enough.
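For completeness, a minimal sketch of the corrected system, with the values of V computed from f rather than typed in by hand (numpy.vander is used here only as a convenience for building A):

import numpy

def f(x):
    return numpy.sin(x / 5) * numpy.exp(x / 10) + 5 * numpy.exp(-x / 2)

points = numpy.array([1., 4., 10., 15.])
A = numpy.vander(points, 4, increasing=True)   # rows are [1, x, x**2, x**3]
V = f(points)
print(numpy.linalg.solve(A, V))   # matches the array given above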
Is it from a data science course? :)
Here is an almost generic solution I wrote:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return np.sin(x / 5) * np.exp(x / 10) + 5 * np.exp(-x / 2)

# approximate at the given points (feel free to experiment: change/add/remove)
points = np.array([1, 4, 10, 15])
n = points.size

# fill A-matrix, each row is xi^0, xi^1, xi^2, ..., xi^(n-1)
A = np.zeros((n, n))
for index in range(0, n):
    A[index] = np.power(np.full(n, points[index]), np.arange(0, n, 1))

# fill b-vector, i.e. function value at the given points
b = f(points)

# solve to get the coefficients of the approximating polynomial
solve = np.linalg.solve(A, b)

# define the polynomial approximation of the function
def polinom(x):
    # y = sum_i solve[i] * x^i
    tiles = np.tile(x, (n, 1))
    tiles[0] = np.ones(x.size)
    for index in range(1, n):
        tiles[index] = tiles[index]**index
    return solve.dot(tiles)

# plot the graphs of the original function and its approximation
x = np.linspace(1, 15, 100)
plt.plot(x, f(x))
plt.plot(x, polinom(x))

# print out the coefficients of the polynomial approximating our function
print(solve)
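As a quick cross-check (my own addition, not part of the answer above), the same coefficients can be recovered with numpy.polyfit, which returns them from the highest degree down:

# Reverse polyfit's output to compare with the increasing-order coefficients above.
coeffs = np.polyfit(points, f(points), n - 1)
print(coeffs[::-1])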

Numpy: Linear system with specific conditions. No negative solutions

I'm writing Python code using numpy. In my code I use linalg.solve to solve a linear system of n equations in n variables. Of course the solutions could be either positive or negative; what I need is for all solutions to be positive, or at least equal to 0. To do so I first want the software to solve my linear system of equations in this form
x = np.linalg.solve(A, b)
in which x is an array with n variables in a specific order (x1, x2, x3, ..., xn), A is an n-by-n square matrix and b is an n-dimensional array.
Now I thought to do this:
- solve the system of equations;
- check whether every x is positive;
- if not, set every negative x to 0 (for example x2 = -2 ----> x2 = 0);
- for a generic xn = 0, eliminate the n-th row and the n-th column of the square matrix A (obtaining another square matrix A1) and eliminate the n-th element of b, obtaining b1;
- solve the system again with the matrix A1 and b1;
- re-iterate until every x is positive or zero;
- at last, build a final array of n elements containing the last iteration's solutions and every variable that was set to zero (I NEED THEM IN THE SAME ORDER AS IF THERE HAD BEEN NO ITERATIONS, so if during the iterations x2 = 0 -----> xfinal = [x1, 0, x3, ..., xn]).
I think it will work, but I don't know how to do it in Python. I hope I was clear; I can't really figure it out!
You have a minimization problem, i.e.
min ||Ax - b||
s.t. x_i >= 0 for all i in [0, n-1]
You can use the Optimize module from Scipy
import numpy as np
from scipy.optimize import minimize

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 10.]], order='C')
b = np.array([6., 12., 21.])
n = len(b)

# Ax = b --> x = [1., -2., 3.]
fun = lambda x: np.linalg.norm(np.dot(A, x) - b)

# xo = np.linalg.solve(A, b)
# sol = minimize(fun, xo, method='SLSQP', constraints={'type': 'ineq', 'fun': lambda x: x})
sol = minimize(fun, np.zeros(n), method='L-BFGS-B', bounds=[(0., None) for x in range(n)])
x = sol['x']  # [2.79149722e-01, 1.02818379e-15, 1.88222298e+00]
With your method I get x = [ 0.27272727, 0., 1.90909091].
In case you still want to use your algorithm, it is below:
n = len(b)
x = np.linalg.solve(A, b)
pos = np.where(x >= 0.)[0]
while len(pos) < n:
    # Re-solve the reduced system on the coordinates currently flagged non-negative.
    Ap = A[pos][:, pos]
    bp = b[pos]
    xp = np.linalg.solve(Ap, bp)
    x = np.zeros(len(b))
    x[pos] = xp
    pos = np.where(x >= 0.)[0]
But I don't recommend you use it; you should use the minimize approach.
Even faster and more reliable, also using minimization, is scipy.optimize.lsq_linear, which is specifically designed for bounded linear least-squares problems.
Note that, compared to minimize, the bounds are transposed (lsq_linear takes a (lower, upper) pair of arrays) and np.inf is used instead of None for an unbounded upper side.
Working example:
from scipy.optimize import lsq_linear
n = A.shape[1]
res = lsq_linear(A, b, bounds=np.array([(0.,np.inf) for i in range(n)]).T, lsmr_tol='auto', verbose=1)
y = res.x
You provide a matrix A and a vector b with as many entries as A has rows.
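If the goal is specifically a non-negative least-squares solution, SciPy also ships a dedicated routine, scipy.optimize.nnls, which solves min ||Ax - b|| subject to x >= 0 directly (a minimal sketch using the A and b defined above):

from scipy.optimize import nnls

x, residual_norm = nnls(A, b)   # x is componentwise non-negative
print(x, residual_norm)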

Minimizing a multivariable function with scipy. Derivative not known

I have a function which is actually a call to another program (some Fortran code). When I call this function (run_moog) I can pass 4 variables, and it returns 6 values. These values should all be close to 0 (in order to minimize), so I combined them like this: np.sum(results**2). Now I have a scalar function that I would like to minimize, i.e. get np.sum(results**2) as close to zero as possible.
Note: when this function (run_moog) takes the 4 input parameters, it creates an input file for the Fortran code that depends on these parameters.
I have tried several approaches from the scipy docs, but none works as expected. The minimization should be able to respect bounds on the 4 variables. Here is an attempt:
from scipy.optimize import minimize # Tried others as well from the docs
x0 = 4435, 3.54, 0.13, 2.4
bounds = [(4000, 6000), (3.00, 4.50), (-0.1, 0.1), (0.0, None)]
a = minimize(fun_mmog, x0, bounds=bounds, method='L-BFGS-B') # I've tried several different methods here
print(a)
This then gives me
status: 0
success: True
nfev: 5
fun: 2.3194639999999964
x: array([ 4.43500000e+03, 3.54000000e+00, 1.00000000e-01,
2.40000000e+00])
message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
jac: array([ 0., 0., -54090399.99999981, 0.])
nit: 0
The third parameter changes slightly, while the others are exactly the same. Also there have been 5 function calls (nfev) but no iterations (nit). The output from scipy is shown here.
A couple of possibilities:
Try COBYLA. It should be derivative-free, and it supports inequality constraints.
You can't use different epsilons via the normal interface, so try scaling your first variable by 1e4 (divide it going in, multiply coming back out; see the short scaling sketch after the code below).
Skip the normal automatic Jacobian constructor, and make your own:
Say you're trying to use SLSQP, and you don't provide a Jacobian function. It makes one for you. The code for it is approx_jacobian in slsqp.py. Here's a condensed version:
from numpy import asfarray, atleast_1d, zeros

def approx_jacobian(x, func, epsilon, *args):
    x0 = asfarray(x)
    f0 = atleast_1d(func(*((x0,) + args)))
    jac = zeros([len(x0), len(f0)])
    dx = zeros(len(x0))
    for i in range(len(x0)):
        dx[i] = epsilon
        jac[i] = (func(*((x0 + dx,) + args)) - f0) / epsilon
        dx[i] = 0.0
    return jac.transpose()
You could try replacing that loop with:
for (i, e) in zip(range(len(x0)), epsilon):
    dx[i] = e
    jac[i] = (func(*((x0 + dx,) + args)) - f0) / e
    dx[i] = 0.0
You can't provide this as the jacobian to minimize, but fixing it up for that is straightforward:
def construct_jacobian(func, epsilon):
    def jac(x, *args):
        x0 = asfarray(x)
        f0 = atleast_1d(func(*((x0,) + args)))
        jac = zeros([len(x0), len(f0)])
        dx = zeros(len(x0))
        # One finite-difference step per variable, with its own epsilon.
        for (i, e) in zip(range(len(x0)), epsilon):
            dx[i] = e
            jac[i] = (func(*((x0 + dx,) + args)) - f0) / e
            dx[i] = 0.0
        return jac.transpose()
    return jac
You can then call minimize like:
minimize(fun_mmog, x0,
jac=construct_jacobian(fun_mmog, [1e0, 1e-4, 1e-4, 1e-4]),
bounds=bounds, method='SLSQP')
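Regarding the scaling suggestion above, here is a minimal sketch of the divide-going-in / multiply-coming-out wrapper (the factor 1e4 and the wrapper name are illustrative; fun_mmog is the function from the question):

SCALE = 1e4  # bring the first variable to roughly the same magnitude as the others

def fun_scaled(z):
    # Undo the scaling before calling the real objective.
    return fun_mmog([z[0] * SCALE, z[1], z[2], z[3]])

x0_scaled = [4435 / SCALE, 3.54, 0.13, 2.4]
bounds_scaled = [(4000 / SCALE, 6000 / SCALE), (3.00, 4.50), (-0.1, 0.1), (0.0, None)]
res = minimize(fun_scaled, x0_scaled, bounds=bounds_scaled, method='L-BFGS-B')
print(res.x[0] * SCALE, res.x[1:])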
It sounds like your target function doesn't have well-behaved derivatives. The line in the output jac: array([ 0., 0., -54090399.99999981, 0.]) means that only changing the third variable value is significant. And because the derivative w.r.t. this variable is virtually infinite, there is probably something wrong in the function. That is also why the third variable ends up at its maximum.
I would suggest that you take a look at the derivatives, at least at a few points in your parameter space. Compute them using finite differences and the default step size of SciPy's fmin_l_bfgs_b, 1e-8. Here is an example of how you could compute the derivatives.
Try also plotting your target function. For instance, keep two of the parameters constant and let the other two vary. If the function has multiple local optima, you shouldn't use gradient-based methods like BFGS.
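As a concrete illustration of that finite-difference check, here is a minimal sketch (my own, not the linked example; fun_mmog and the starting point are taken from the question):

import numpy as np

def fd_gradient(f, x, eps=1e-8):
    # Forward-difference approximation of the gradient of f at x.
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f0) / eps
    return grad

print(fd_gradient(fun_mmog, [4435, 3.54, 0.13, 2.4]))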
How difficult is it to get an analytical expression for the gradient? If you have one, you can then approximate the product of the Hessian with a vector using finite differences, and then use the other optimization routines available.
Among the various optimization routines available in SciPy, the one called TNC (Truncated Newton) is quite robust to the numerical values associated with the problem.
The Nelder-Mead simplex method (suggested by Cristián Antuña in the comments above) is well known to be a good choice for optimizing (possibly ill-behaved) functions with no knowledge of derivatives (see Numerical Recipes in C, Chapter 10).
There are two somewhat specific aspects to your question. The first is the constraints on the inputs, and the second is a scaling problem. The following suggests solutions to these points, but you might need to manually iterate between them a few times until things work.
Input Constraints
Assuming your input constraints form a convex region (as your examples above indicate, but I'd like to generalize it a bit), then you can write a function
def is_in_bounds(p):
    # Return whether p is inside the feasible region.
    ...
Using this function, assume that the algorithm wants to move from point from_ to point to, where from_ is known to be in the region. Then the following function will efficiently find the farthest point on the segment between the two points to which it can still proceed:
from numpy.linalg import norm

def progress_within_bounds(from_, to, eps):
    """
    from_ -- source (in region)
    to -- target point
    eps -- Euclidean precision along the line
    """
    if norm(from_ - to) < eps:
        return from_
    mid = (from_ + to) / 2
    if is_in_bounds(mid):
        return progress_within_bounds(mid, to, eps)
    return progress_within_bounds(from_, mid, eps)
(Note that this function can be optimized for some regions, but it's hardly worth the bother, as it doesn't even call your original objective function, which is the expensive one.)
One of the nice aspects of Nelder-Mead is that it performs a series of steps which are quite intuitive. Some of these steps can obviously take you out of the region, but it's easy to modify this. Here is an implementation of Nelder-Mead with the modifications marked between pairs of lines of the form ##################################################################:
import copy

'''
Pure Python/Numpy implementation of the Nelder-Mead algorithm.
Reference: https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method
'''

def nelder_mead(f, x_start,
                step=0.1, no_improve_thr=10e-6, no_improv_break=10, max_iter=0,
                alpha=1., gamma=2., rho=-0.5, sigma=0.5):
    '''
    @param f (function): function to optimize, must return a scalar score
        and operate over a numpy array of the same dimensions as x_start
    @param x_start (numpy array): initial position
    @param step (float): look-around radius in initial step
    @no_improv_thr, no_improv_break (float, int): break after no_improv_break iterations with
        an improvement lower than no_improv_thr
    @max_iter (int): always break after this number of iterations.
        Set it to 0 to loop indefinitely.
    @alpha, gamma, rho, sigma (floats): parameters of the algorithm
        (see Wikipedia page for reference)
    '''

    # init
    dim = len(x_start)
    prev_best = f(x_start)
    no_improv = 0
    res = [[x_start, prev_best]]

    for i in range(dim):
        x = copy.copy(x_start)
        x[i] = x[i] + step
        score = f(x)
        res.append([x, score])

    # simplex iter
    iters = 0
    while 1:
        # order
        res.sort(key=lambda x: x[1])
        best = res[0][1]

        # break after max_iter
        if max_iter and iters >= max_iter:
            return res[0]
        iters += 1

        # break after no_improv_break iterations with no improvement
        print('...best so far:', best)

        if best < prev_best - no_improve_thr:
            no_improv = 0
            prev_best = best
        else:
            no_improv += 1

        if no_improv >= no_improv_break:
            return res[0]

        # centroid
        x0 = [0.] * dim
        for tup in res[:-1]:
            for i, c in enumerate(tup[0]):
                x0[i] += c / (len(res)-1)

        # reflection
        xr = x0 + alpha*(x0 - res[-1][0])
        ##################################################################
        ##################################################################
        xr = progress_within_bounds(x0, x0 + alpha*(x0 - res[-1][0]), prog_eps)
        ##################################################################
        ##################################################################
        rscore = f(xr)
        if res[0][1] <= rscore < res[-2][1]:
            del res[-1]
            res.append([xr, rscore])
            continue

        # expansion
        if rscore < res[0][1]:
            xe = x0 + gamma*(x0 - res[-1][0])
            ##################################################################
            ##################################################################
            xe = progress_within_bounds(x0, x0 + gamma*(x0 - res[-1][0]), prog_eps)
            ##################################################################
            ##################################################################
            escore = f(xe)
            if escore < rscore:
                del res[-1]
                res.append([xe, escore])
                continue
            else:
                del res[-1]
                res.append([xr, rscore])
                continue

        # contraction
        xc = x0 + rho*(x0 - res[-1][0])
        ##################################################################
        ##################################################################
        xc = progress_within_bounds(x0, x0 + rho*(x0 - res[-1][0]), prog_eps)
        ##################################################################
        ##################################################################
        cscore = f(xc)
        if cscore < res[-1][1]:
            del res[-1]
            res.append([xc, cscore])
            continue

        # reduction
        x1 = res[0][0]
        nres = []
        for tup in res:
            redx = x1 + sigma*(tup[0] - x1)
            score = f(redx)
            nres.append([redx, score])
        res = nres
Note: this implementation is GPL, which is either fine for you or not. It's extremely easy to modify NM from any pseudocode, though, and you might want to throw in simulated annealing in any case.
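A minimal usage sketch for the modified version above (prog_eps is the module-level precision that progress_within_bounds expects, the box below mirrors the bounds from the question, and fun_mmog is again the expensive objective):

import numpy as np

prog_eps = 1e-6   # precision used by progress_within_bounds in the modified steps

def is_in_bounds(p):
    # Example convex region: the box constraints from the question.
    lo = np.array([4000., 3.00, -0.1, 0.0])
    hi = np.array([6000., 4.50, 0.1, np.inf])
    return bool(np.all(p >= lo) and np.all(p <= hi))

best_x, best_score = nelder_mead(fun_mmog, np.array([4435., 3.54, 0.13, 2.4]))
print(best_x, best_score)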
Scaling
This is a trickier problem, but jasaarim has made an interesting point regarding that. Once the modified NM algorithm has found a point, you might want to run matplotlib.contour while fixing a few dimensions, in order to see how the function behaves. At this point, you might want to rescale one or more of the dimensions, and rerun the modified NM.