I am new to the scipy.optimize module. I am using its minimize function to find an x that minimizes a multivariate function, which takes a matrix as input but returns a scalar value. I have one equality constraint and one inequality constraint, both of which take a vector as input and return vector values. Specifically, here is the list of constraints:
sum(x) = 1 ;
AST + np.log2(x) >= 0
where AST is just a parameter. I defined my constraint functions as below:
For equality constraint: lambda x: sum(x) - 1
For inequality constraint:
def asset_cons(x):
    # global AST
    if np.logical_and.reduce((AST + np.log2(x)) >= 0):
        return 0.01
    else:
        return -1
Then I call
cons = ({'type': 'eq', 'fun': lambda x: sum(x) - 1},
        {'type': 'ineq', 'fun': asset_cons})
res = optimize.minimize(test_obj, [0.2, 0.8], constraints=cons)
But I still get an error complaining about my constraint function. Is a constraint function allowed to return a vector value, or does it have to return a scalar in order to work with minimize?
Could anyone help me see whether the way I specify the constraints has any problems?
In principle this does not look that wrong at all. However, it is a bit difficult to say without seeing something about test_obj and the actual error. Does it throw an exception (which hints at a programming error) or complain about convergence (which hints at a mathematical challenge)?
You have the basic idea right; you need to have a function accepting an input vector with N elements and returning the value to be minimized. Your boundary conditions should also accept the same input vector and return a single scalar as their output.
To my eye there is something wrong with your boundary conditions. The first one (sum(x) - 1) is fine, but the second one is mathematically challenging, as you have defined it as a stepwise function. Many optimization algorithms want continuous functions, preferably with quite smooth behaviour. (I do not know whether the algorithms used by this function handle stepwise functions well, so this is just a guess.)
If the above holds true, you might make things easier by, for example:
np.amin(AST + np.log2(x))
The function will be non-negative if all AST + log2(x[n]) >= 0. (It is still not extremely smooth, but if that is a problem it is easy to improve.) And now it'll also fit into one lambda.
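For instance, a minimal sketch of how the constraints could then be written, with a dummy objective and a placeholder AST value standing in for the ones not shown in the question:

import numpy as np
from scipy import optimize

AST = 3.0                                    # placeholder value
test_obj = lambda x: np.sum(x ** 2)          # dummy objective for illustration

cons = ({'type': 'eq',   'fun': lambda x: np.sum(x) - 1},
        {'type': 'ineq', 'fun': lambda x: np.amin(AST + np.log2(x))})

res = optimize.minimize(test_obj, [0.2, 0.8], constraints=cons,
                        bounds=[(1e-6, None)] * 2,   # keep x positive so log2 is defined
                        method='SLSQP')
print(res.x, res.success)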
If you have difficulties in convergence, you should probably try both COBYLA and SLSQP, unless you already know that one of them is better for your problem.
Related
I am very new to Pyomo, working on a use case where my objective function coefficient is dynamic & needs a min-max function.
Objective = Max( sum(P * UC) - sum(P - min(P)) * UC )
where P is the variable to be optimized and UC is a value derived from some calculation.
I have a few doubts:
How do I use a min or max function in the objective function? I have tried np.min and calling a function, but that gives an error since the function has an if/else condition.
I have tried multiple things but none seems to work. If someone can help me with dummy code, that would be great.
Thanks in advance.
Min could be implemented by defining a new variable min_P, which needs to be smaller than any of the P, expressed by constraints:
min_P <= P[i] for all i
This will make sure that min_P is not larger than the smallest of the P. Then you can just use min_P in your objective function. I assume you know how to define constraints like this. This might result in an unbounded-variable problem, depending on how exactly you are optimizing, but it should put you on the right track.
The max case is analogous, if you introduce another variable for the expression sum(P * UC) - sum(P - min(P)) * UC.
It is not clear whether UC is a parameter or a variable itself (calculated in another constraint). In the latter case, the whole problem will be highly nonlinear and should be reconsidered.
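Assuming UC is a fixed parameter, a minimal Pyomo sketch of the min_P construction could look like this (the index set, data and bounds are made up purely for illustration):

from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           NonNegativeReals, maximize)

items = [1, 2, 3]
UC = {1: 2.0, 2: 3.0, 3: 1.5}          # placeholder data

m = ConcreteModel()
m.P = Var(items, within=NonNegativeReals, bounds=(0, 10))
m.min_P = Var(within=NonNegativeReals)

# min_P <= P[i] for all i
def min_rule(m, i):
    return m.min_P <= m.P[i]
m.min_con = Constraint(items, rule=min_rule)

# use min_P wherever min(P) appears in the objective
m.obj = Objective(
    expr=sum(m.P[i] * UC[i] for i in items)
         - sum((m.P[i] - m.min_P) * UC[i] for i in items),
    sense=maximize)

Since min_P enters the maximized objective with a positive coefficient, the solver pushes it up against the tightest of the min_P <= P[i] constraints, so it coincides with min(P) at the optimum; pushing in the opposite direction is where the unbounded-variable issue mentioned above can appear.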
I do not understand your AbstractModel vs ConcreteModel question. If you have the data available, use a ConcreteModel. Apart from that, see here.
I have a hyperbolic function and I need to find its zero. I have tried various classical methods (bisection, Newton and so on).
Second derivatives are continuous but not accessible analytically, so I have to exclude methods that use them.
For the purposes of my application, Newton's method is the only one providing sufficient speed, but it is relatively unstable if I am not close enough to the actual zero. Here is a simple screenshot:
The zero is somewhere around 0.05, and since the function diverges at 0, if I take an initial guess that lies beyond the minimum by a certain extent, I obviously run into problems with the asymptote.
Is there a more stable method for this case that could still offer speed comparable to Newton's?
I also thought of transforming the function into an equivalent, better-behaved function with the same zero and only then applying Newton, but I don't really know which transformations I could use.
Any help would be appreciated.
Dekker's or Brent's method should be almost as fast as Newton. If you want something simple to implement yourself, the Illinois variant of the regula falsi method is also reasonably fast. These are all bracketing methods, so they should not leave the domain if the initial interval lies inside the domain.
import numpy as np

def illinois(f, a, b, tol=1e-8):
    '''Regula falsi resp. false position method with
    the Illinois anti-stalling variation.'''
    fa = f(a)
    fb = f(b)
    if abs(fa) < abs(fb):
        a, fa, b, fb = b, fb, a, fa
    while np.abs(b - a) > tol:
        c = (a*fb - b*fa) / (fb - fa)
        fc = f(c)
        if fa*fc < 0:
            # the root stays on the a-side: halve fa to avoid stalling (Illinois)
            fa *= 0.5
        else:
            a, fa = b, fb
        b, fb = c, fc
    return b, fb
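For instance, on a made-up function with a pole at 0 and a root at 0.05 (standing in for the one in the question, which is not shown):

f = lambda x: 1.0 / x - 20.0          # diverges at x = 0, root at x = 0.05
root, fval = illinois(f, 0.01, 1.0)   # bracket chosen inside the domain
print(root, fval)                     # ~0.05 with a residual near 0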
How about using log(x) instead of x?
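That is, substituting x = exp(u) moves the problematic boundary at x = 0 to u -> -inf, so a Newton/secant iterate can no longer step across the asymptote. A rough sketch on a made-up function with a pole at 0 and a root at 0.05 (the actual function is not given in the question):

import numpy as np
from scipy.optimize import newton

f = lambda x: 1.0 / x - 20.0     # made-up stand-in: pole at x = 0, root at x = 0.05
g = lambda u: f(np.exp(u))       # work in u = log(x); g is defined for every real u

u0 = np.log(0.1)                 # initial guess x = 0.1, expressed in u
u_root = newton(g, u0)           # secant iteration, since no derivative is passed
print(np.exp(u_root))            # ~0.05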
For your case, @sams-studio's answer might work, and I would try that first. In a similar situation, also in a multivariate context, I used Newton homotopy methods.
Basically, you limit the Newton step so that the absolute value of y keeps decreasing.
The cheapest way to implement this is to halve the Newton step whenever |y| increases compared to the last step. After a few steps you are back at full Newton steps with second-order convergence.
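A minimal sketch of this step-limiting (damped Newton) idea, with made-up function and argument names since no code is given in the thread:

def damped_newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton iteration that halves the step while the residual |f| would grow."""
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        if abs(fx) < tol:
            break
        step = fx / df(x)
        lam = 1.0
        # shrink the step until |f| actually decreases (or lam becomes tiny)
        while abs(f(x - lam * step)) >= abs(fx) and lam > 1e-12:
            lam *= 0.5
        x -= lam * step
        fx = f(x)
    return x

# e.g. damped_newton(lambda x: 1/x - 20, lambda x: -1/x**2, x0=0.5) converges to ~0.05,
# while an undamped Newton step from 0.5 would jump across the asymptote to x = -4.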
Disclaimer: if you can bound your solution (you know a maximal x), the answer from @Lutz Lehmann would also be my first choice.
I would like to minimize an objective function which calls a simulation program in every step and returns a scalar. Is there any way to restrict the result of the objective function? For example, I would like to get the values of the variables which bring the result as close to 1 as possible.
I tried simply subtracting 1 from the result of the objective function, but that didn't help. I also played around with constraints, but if I understand them correctly they only apply to the input variables. Another option could be to create a log which stores the values of all variables after every iteration (which I'm doing already). In the end it should be possible to search for the iteration whose result was closest to 1 and return its variable configuration. The problem is that the minimization would probably run far too long and produce useless results. Is there any better way?
def objective(data):
    """
    Optimization Function
    :param data: list containing the current guess (list of float values)
    :return: each iteration returns a scalar which should be minimized
    """
    # do simulation and calculate scalar
    return result - 1.0  # doesn't work since result is becoming negative
def optimize(self):
    """
    daemon which triggers input, reads output and optimizes results
    :return: optimized results
    """
    # initialize log, initial guess etc.
    sol = minimize(self.objective, x0, method='SLSQP',
                   options={'eps': 1e-3, 'ftol': 1e-9}, bounds=boundList)
The goal is to find a solution which can be adapted to any target value. The user should be able to enter a value and the minimization will return the best variable configuration for this target value.
As discussed in the comments, one way of achieving this is to use
return (result - 1.0) ** 2
in objective. Then the returned value cannot become negative, and the optimization will try to find a result that is as close as possible to your target value (e.g. 1.0 in your case).
Illustration, using first your current set-up:
from scipy.optimize import minimize

def objective(x, target_value):
    # replace this by your actual calculations
    result = x[0] - 9.0
    return result - target_value

# add some bounds for all the parameters you have
bnds = [(-100, 100)]

# target_value is passed in args; feel free to add more
res = minimize(objective, x0=[1.0], args=(1.0,), bounds=bnds)

if res.success:
    # that's the optimal x
    print(f"optimal x: {res.x[0]}")
else:
    print("Sorry, the optimization was not successful. Try with another initial"
          " guess or optimization method")
As we chose -100 as the lower bound for x and ask to minimize the objective, the optimal x is -100 (will be printed if you run the code from above). If we now replace the line
return result - target_value
by
return (result - target_value) ** 2
and leave the rest unchanged, the optimal x is 10 as expected.
Please note that I pass your target value as an additional argument so that your function is slightly more flexible.
I have an optimization problem with constraints, but the COBYLA solver doesn't seem to respect the constraints I specify.
My optimization problem:
cons = ({'type': 'ineq', 'fun': lambda t: t},)  # all variables must be positive
minimize(lambda t: -stateEst(dict(zip(self.edgeEvents.keys(), t))),
         (0.1,)*len(self.edgeEvents), constraints=cons, method='COBYLA')
and stateEst is defined as:
def stateEst(ts):
    val = 0
    for edge, nextState in self.edgeEvents.iteritems():
        val += edge_probability(self, edge, ts) * estimates[nextState]
        val += node_probability(self, edge.head, ts, edge_list=[edge]) * cost
    for node, nextState in self.nodeEvents.iteritems():
        val += node_probability(self, node, ts) * \
               (estimates[nextState] + cost*len([e for e in node.incoming if e in self.compEdges]))
    return val
The probability functions are only defined for positive t values. The dictionary is necessary because the probabilities are calculated with respect to the 'named' t-values.
When I run this, I notice that COBYLA tries a value of -0.025 for one of the t-values. Why is the optimization not respecting the constraints?
COBYLA is, technically speaking, an infeasible method, which means that the iterates are not necessarily feasible with respect to your constraints! (For these algorithms, feasibility only matters at the final convergence point.)
Using an objective function which is not defined everywhere will be problematic. Maybe you are forced to switch to some feasible method.
Alternatively, you could think about generalizing your objective so that penalties are introduced for negative t's. But this is problem-dependent and could introduce other problems as well (convergence; numerical stability).
Try L-BFGS-B, which is limited to bound constraints; that is not a problem here (for your current problem!).
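A rough sketch of what that call could look like, with a placeholder objective standing in for -stateEst and a made-up problem size:

import numpy as np
from scipy.optimize import minimize

n = 4                                  # made-up number of t-values
def neg_state_est(t):                  # placeholder for -stateEst(...)
    return np.sum((t - 0.3) ** 2)

bounds = [(1e-9, None)] * n            # t > 0 enforced as simple bounds
res = minimize(neg_state_est, x0=np.full(n, 0.1),
               method='L-BFGS-B', bounds=bounds)
print(res.x)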
For something this simple, just redefine your function to accept any real value by plugging np.exp(t) (or even t**2) into it, then exponentiate (or square) the solution to map it back to the original variable.
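For example, a sketch of that reparameterization with the same placeholder objective as above (the optimizer works on an unconstrained u, and t = exp(u) is always positive):

import numpy as np
from scipy.optimize import minimize

def neg_state_est(t):                  # placeholder for -stateEst(...)
    return np.sum((t - 0.3) ** 2)

res = minimize(lambda u: neg_state_est(np.exp(u)), x0=np.log(np.full(4, 0.1)))
t_opt = np.exp(res.x)                  # map the solution back to t > 0
print(t_opt)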
Using the Excel solver, it is easy to find a solution (optimum values for x and y) for this equation:
(x*14.80461) + (y * -4.9233) + (10*0.4803) ≈ 0
However, I can't figure out how to do this in Python. The existing scipy.optimize library functions like fsolve() or leastsq() seem to work with only one variable... (I might just not know how to use them).
Any suggestions?
Thanks!
>>> from scipy.optimize import fsolve
>>> def f(x):
...     return x[0]*14.80461 + x[1]*(-4.9233) + x[2]*(10*0.4803)
...
>>> def vf(x):
...     return [f(x), 0, 0]
...
>>> xx = fsolve(vf, x0=[0, 0, 1])
>>> f(xx)
8.8817841970012523e-16
Since the solution is not unique, different initial values for an unknown lead to different (valid) solutions.
EDIT: Why this works. Well, it's a dirty hack. It's just that fsolve and its relatives deal with systems of equations. What I did here was define a system of three equations (vf(x) returns a three-element list) for three variables (x has three elements). Now fsolve uses a Newton-type algorithm to converge to a solution.
Clearly, the system is underdetermined: you can specify arbitrary values for two variables, say x[1] and x[2], and find x[0] to satisfy the only non-trivial equation you have. You can see this explicitly by supplying a couple of different initial guesses x0 and observing different outputs, all of which satisfy f(x)=0 up to a certain tolerance.
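For instance, continuing the session above, each of these initial guesses leads to a different solution vector, while f(xx) stays at roughly machine precision (outputs omitted):

>>> for x0 in ([0, 0, 1], [1, 2, 3], [-5, 0, 2]):
...     xx = fsolve(vf, x0=x0)
...     print(xx, f(xx))
...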