I would like to minimize an objective function which calls simulation software in every step and returns a scalar. Is there any way to restrict the result of the objective function? For example, I would like to get the values of the variables which bring the result as close to 1 as possible.
I tried to simply subtract 1 from the result of the objective function, but that didn't help. I also played around with constraints, but if I understand them correctly they only apply to the input variables. Another way could be to create a log which stores the values of all variables after every iteration (which I'm doing already). In the end it should be possible to search for the iteration whose result was closest to 1 and return its variable configuration. The problem is that the minimization probably runs way too long and creates useless results. Is there any better way?
def objective(data):
    """
    Optimization function
    :param data: list containing the current guess (list of float values)
    :return: each iteration returns a scalar which should be minimized
    """
    # do simulation and calculate scalar
    return result - 1.0  # doesn't work since the result becomes negative
def optimize(self):
    """
    daemon which triggers input, reads output and optimizes results
    :return: optimized results
    """
    # initialize log, initial guess etc.
    sol = minimize(self.objective, x0, method='SLSQP',
                   options={'eps': 1e-3, 'ftol': 1e-9}, bounds=boundList)
The goal is to find a solution which can be adapted to any target value. The user should be able to enter a value and the minimization will return the best variable configuration for this target value.
As discussed in the comments, one way of achieving this is to use
return (result - 1.0) ** 2
in objective. Then the returned value cannot become negative, and the optimization will try to drive result as close as possible to your target value (e.g. 1.0 in your case).
An illustration, first using your current set-up:
from scipy.optimize import minimize

def objective(x, target_value):
    # replace this by your actual calculations
    result = x - 9.0
    return result - target_value

# add some bounds for all the parameters you have
bnds = [(-100, 100)]

# target_value is passed in args; feel free to add more
res = minimize(objective, x0=[1.0], args=(1.0,), bounds=bnds)

if res.success:
    # that's the optimal x
    print(f"optimal x: {res.x[0]}")
else:
    print("Sorry, the optimization was not successful. Try with another initial"
          " guess or optimization method")
As we chose -100 as the lower bound for x and asked to minimize the objective, the optimal x is -100 (it will be printed if you run the code above). If we now replace the line
return result - target_value
by
return (result - target_value) ** 2
and leave the rest unchanged, the optimal x is 10 as expected.
Please note that I pass your target value as an additional argument so that your function is slightly more flexible.
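For completeness, here is a minimal sketch of how the squared-residual objective might be plugged back into the SLSQP set-up from the question; run_simulation is a hypothetical stand-in for the actual simulation call, and x0, boundList and target_value are assumed to be defined as in the original code:

from scipy.optimize import minimize

def objective(data, target_value):
    result = run_simulation(data)        # hypothetical stand-in for the simulation call
    return (result - target_value) ** 2  # non-negative, zero exactly at the target

sol = minimize(objective, x0, args=(target_value,), method='SLSQP',
               options={'eps': 1e-3, 'ftol': 1e-9}, bounds=boundList)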
Related
I need to minimise an input value (x) until the output value (y) meets the desired value exactly, or is off by only some decimals.
I know that I can use the scipy.optimize.minimize function, but I am having trouble setting it up properly.
optimize_fun_test(100)
This is my function which, for a specific value (in this case the price), calculates the margin I will have when I sell my product.
Now, from my understanding, I need to set a constraint which holds the minimum margin I want to target: my goal is to sell the product as cheaply as possible, but I need to reach a minimum margin in order to be profitable.
So let's say my function calculates a margin of 25% (0.25) when I sell my product for 100 Euro, and the margin I wish to target is 12.33452% (0.1233452); then the price will have to be lower. The minimize function should find the exact price at which I would need to sell in order to meet my desired margin of 0.1233452, or at least get very close to it, say a price whose margin is off by at most 0.000001 from the desired margin.
optimize_fun_test(price)
s_price = 100
result = minimize(optimize_fun_test, s_price)
So this is what I have right now. I know it's not a lot, but I don't know how to continue with the constraints.
From YouTube videos I learnt that there are equality and inequality constraints, so I guess I want an equality constraint, right?
Should the constraint look like this?
cons = ({'type': 'eq', 'fun': lambda x_start: x_start - 0.1233452})
Minimizing f(x) subject to f(x) = c is the same as just solving the equation f(x) = c which in turn is equivalent to finding the root of the function q(x) = f(x) - c. Long story short, use scipy.optimize.root instead:
import numpy as np
from scipy.optimize import root

def eq_to_solve(x):
    return optimize_fun_test(x) - 0.1233452

x0 = np.array([100.0])  # initial guess (please adapt the dimension for your problem)
sol = root(eq_to_solve, x0=x0)
In case you really want to solve it as an optimization problem (which doesn't make sense from a mathematical point of view):
from scipy.optimize import minimize
constr = {'type': 'eq', 'fun': lambda x: optimize_fun_test(x) - 0.1233452}
x0 = np.array([100.0]) # your initial guess
res = minimize(optimize_fun_test, x0=x0, constraints=constr)
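To try either snippet end to end, one can plug in a toy margin function; the cost value and the linear margin formula below are purely illustrative assumptions, not taken from the question:

import numpy as np
from scipy.optimize import root

def optimize_fun_test(price):
    cost = 80.0                    # assumed fixed cost, for illustration only
    return (price - cost) / price  # margin as a fraction of the selling price

def eq_to_solve(x):
    return optimize_fun_test(x) - 0.1233452

sol = root(eq_to_solve, x0=np.array([100.0]))
print(sol.x)  # price whose margin is (numerically) 0.1233452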
I am very new to Pyomo, working on a use case where my objective function coefficient is dynamic & needs a min-max function.
Objective function = Max( sum(P * UC) - sum(P - min(P)) * UC )
where P is the variable that needs to be optimized and UC is a function whose value is derived from some calculation.
I have a few doubts:
How do I use a min or max function in the objective function? I have tried np.min and calling a function, but it gives an error since the function contains an if/else condition.
I have tried multiple things but none of them seems to work. If someone can help me with dummy code, that would be great.
Thanks in Advance.
Min could be implemented by defining a new variable min_P, which needs to be no larger than any of the P, expressed by constraints:
min_P <= P[i] for all i
This will make sure that min_P is not larger than the smallest of the P. Then you can just use min_P in your objective function; I assume you know how to define constraints like this (a short sketch follows below). This might result in an unbounded variable problem, depending on how exactly you are optimizing, but this should put you on the right track.
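A minimal Pyomo sketch of that reformulation, assuming P is indexed by a set model.I; the index set, bounds and initialization values are placeholders, not taken from the question:

import pyomo.environ as pyo

model = pyo.ConcreteModel()
model.I = pyo.Set(initialize=[1, 2, 3])      # placeholder index set
model.P = pyo.Var(model.I, bounds=(0, 100))  # the variables to be optimized
model.min_P = pyo.Var(bounds=(0, 100))       # auxiliary variable standing in for min(P)

# min_P may not exceed any P[i]; the objective then pulls it up to the smallest P
def min_rule(m, i):
    return m.min_P <= m.P[i]
model.min_P_constr = pyo.Constraint(model.I, rule=min_rule)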
The max case is analogous, if you define another auxiliary variable for the expression sum(P * UC) - sum(P - min(P)).
It is not clear whether UC is a parameter or a variable itself (calculated in another constraint). In the latter case, the whole problem will be highly nonlinear and should be reconsidered.
I do not understand your AbstractModel vs ConcreteModel question. If you have the data available, use a ConcreteModel. Apart from that, see here.
We have a specific set of equations whose solutions are highly dependent on the input guesses. If we use the verbose option given in the docs, it displays a whole lot of extra information. We only want the error, or to be precise the RESIDUE. How do we obtain that?
For instance, consider the following code snippet:
import mpmath as mp

def f(x):
    return [...]  # some function of x

y = mp.findroot(f, x0=[1 + 1j])
print(y)
If we run the code, we get the following error:
Could not find root within given tolerance. (0.037331322115722662107 > 2.16840434497100886801e-19)
Try another starting point or tweak arguments. (the actual code has many more variables unlike the above code)
Now, we can silence this warning by setting the argument verify = False, as given in the docs.
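For reference, assuming the same set-up as above, the call with the warning silenced might look like this:

y = mp.findroot(f, x0=[1 + 1j], verify=False)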
In that case we do get an output value, but it is not exact and has some error/residue associated with it.
Now, if we were to set up a loop and feed in a whole variety of starting guesses x0, we can get as output an array of corresponding y's. However, can we also get the error/residue committed by the mp.findroot solver corresponding to each y?
For instance, it would be nice if there was something like
z = mp.findroot.error(f, x0)
Thus for each guess x0, we could get a corresponding y and a corresponding z, and this would allow us to pick the best one from all the initial guesses (i.e. the one which produces the smallest residue).
Is there any way of finding the explicit value of this residue, and storing it in a variable?
I don't think it's possible to do this in Python, but if you want the same behaviour you can try to do it in MATLAB using function handles.
fun = @(x) exp(-exp(-x)) - x;           % function handle
x0 = [0 1];                             % initial interval
options = optimset('Display','final');  % show final answer
[x, fval, exitflag, output] = fzero(fun, x0, options)
Here x will be the value and fval will be the error (residual) at that value.
I need some help writing a pretty simple piece of code (at least in pseudocode):
I want to fit data using a polynomial of order n, where n is a parameter and should be changeable. On top of that I would like to always keep the first three coefficients fixed at zero. So I need something like
order = 5

def poly(x, c0=0, c1=0, c2=0, c3, c4, c5):
    return numpy.polynomial.polynomial.polyval(x, [c0, c1, c2, c3, c4, c5], tensor=False)

popt, pcov = scipy.optimize.curve_fit(poly, x, y)
So the problems I cannot solve at the moment are:
How do I create a polynomial function with n coefficients? I basically need to create a list of variables of length n.
If that is solved, then we could set c0 to c2 to 0.
I hope I was able to make myself clear; if not, please help me refine my question.
You currently do not keep the first 3 coefficients fixed to 0, you just give them a default value.
Arbitrary argument lists seem to be what you are looking for:
def poly(x, *args):
    return numpy.polynomial.polynomial.polyval(x, [0, 0, 0] + list(args), tensor=False)
If the number of arguments MUST be of fixed length (for instance n), you can check len(args) and raise an error if necessary.
Calling poly(x, a, b, c) now evaluates the polynomial with the coefficients [0, 0, 0, a, b, c] at x.
You can find more information in Python's documentation: https://docs.python.org/3/tutorial/controlflow.html#more-on-defining-functions
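One practical detail: with a *args signature, curve_fit cannot determine the number of fit parameters from the function itself, so an initial guess p0 of the desired length has to be supplied. A small usage sketch with made-up sample data might look like this:

import numpy
import scipy.optimize

order = 5                      # total polynomial order, as in the question
x = numpy.linspace(0, 1, 50)   # made-up sample data
y = 2.0 * x**3 - 1.0 * x**5 + 0.01 * numpy.random.randn(50)

# p0 tells curve_fit how many coefficients to fit (here c3 .. c5, i.e. order - 2 values)
p0 = numpy.ones(order - 2)
popt, pcov = scipy.optimize.curve_fit(poly, x, y, p0=p0)
print(popt)  # fitted coefficients c3, c4, c5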
I am new to the scipy.optimize module. I am using its minimize function to try to find an x that minimizes a multivariate function, which takes a matrix input but returns a scalar value. I have one equality constraint and one inequality constraint, both taking vector inputs and returning vector values. In particular, here is the list of constraints:
sum(x) = 1 ;
AST + np.log2(x) >= 0
where AST is just a parameter. I defined my constraint functions as below:
For equality constraint: lambda x: sum(x) - 1
For inequality constraint:
def asset_cons(x):
    # global AST
    if np.logical_and.reduce((AST + np.log2(x)) >= 0):
        return 0.01
    else:
        return -1
Then I call
cons = ({'type': 'eq', 'fun': lambda x: sum(x) - 1},
        {'type': 'ineq', 'fun': asset_cons})
res = optimize.minimize(test_obj, [0.2, 0.8], constraints=cons)
But I still got an error complaining about my constraint function. Is a constraint function allowed to return a vector value, or do I have to return a scalar in order to use this minimize function?
Could anyone help me see whether the way I specify the constraints has any problems?
In principle this does not look that wrong at all. However, it is a bit difficult to say without seeing something about test_obj and the actual error. Does it throw an exception (which hints at a programming error) or complain about convergence (which hints at a mathematical challenge)?
You have the basic idea right; you need to have a function accepting an input vector with N elements and returning the value to be minimized. Your boundary conditions should also accept the same input vector and return a single scalar as their output.
To my eye there is something wrong with your boundary conditions. The first one (sum(x) - 1) is fine, but the second one is mathematically challenging, as you have defined it as a stepwise function. Many optimization algorithms want continuous functions with preferably quite smooth behaviour. (I do not know if the algorithms used by this function handle stepwise functions well, so this is just a guess.)
If the above holds true, you might make things easier by, for example:
np.amin(AST + np.log2(x))
The function will be non-negative if all AST + log2(x[n]) >= 0. (It is still not extremely smooth, but if that is a problem it is easy to improve.) And now it'll also fit into one lambda.
If you have difficulties in convergence, you should probably try both COBYLA and SLSQP, unless you already know that one of them is better for your problem.
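A minimal sketch of how the smoother constraint might look in the original set-up; test_obj stands for the poster's objective function, and the AST value and initial guess are placeholder assumptions:

import numpy as np
from scipy import optimize

AST = 3.0  # placeholder parameter value

cons = ({'type': 'eq',   'fun': lambda x: np.sum(x) - 1},
        {'type': 'ineq', 'fun': lambda x: np.amin(AST + np.log2(x))})

# test_obj is the objective from the question; SLSQP handles both constraint types
res = optimize.minimize(test_obj, [0.2, 0.8], method='SLSQP', constraints=cons)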