Minimizing a function until it equals a desired value - python

I need to minimise an input value (x) until the output value (y) meets the desired value exactly, or is off by at most some decimal places.
I know that I can use the scipy.optimize.minimize function, but I am having trouble setting it up properly.
optimize_fun_test(100)
This is my function, which for a specific value (in this case the price) calculates the margin I will have when I sell my product.
From my understanding I need to set a constraint which holds the minimum margin I want to target: my goal is to sell the product as cheaply as possible, but I have to hit a minimum margin in order to be profitable.
So let's say that my function calculates a margin of 25% (0.25) when I sell my product for 100 Euro, and the margin I wish to target is 12.33452% (0.1233452); then the price will be lower. The minimize function should be able to find the exact price for which I would need to sell the product in order to meet my desired margin of 0.1233452, or at least come very close, say a price whose margin is off by at most 0.000001 from the desired margin.
from scipy.optimize import minimize

# optimize_fun_test(price) returns the margin for a given price
s_price = 100
result = minimize(optimize_fun_test, s_price)
So this is what I have right now; I know it's not a lot, but I don't know how to continue with the constraints.
From YouTube videos I learnt that there are equality and inequality constraints, so I guess I want an equality constraint, right?
Should the constraint look like this?
cons = ({'type': 'eq', 'fun': lambda x_start: x_start - 0.1233452},)

Minimizing f(x) subject to f(x) = c is the same as just solving the equation f(x) = c which in turn is equivalent to finding the root of the function q(x) = f(x) - c. Long story short, use scipy.optimize.root instead:
import numpy as np
from scipy.optimize import root

def eq_to_solve(x):
    return optimize_fun_test(x) - 0.1233452

x0 = np.array([100.0])  # initial guess (please adapt the dimension for your problem)
sol = root(eq_to_solve, x0=x0)
In case you really want to solve it as an optimization problem (which doesn't make sense from a mathematical point of view):
import numpy as np
from scipy.optimize import minimize

constr = {'type': 'eq', 'fun': lambda x: optimize_fun_test(x) - 0.1233452}
x0 = np.array([100.0])  # your initial guess
res = minimize(optimize_fun_test, x0=x0, constraints=constr)
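To make this concrete, here is a self-contained sketch of the root-based approach; optimize_fun_test is replaced by a made-up linear margin model purely for illustration:
import numpy as np
from scipy.optimize import root

# stand-in for the real margin function (invented for this example):
# 25% margin at a price of 100, falling linearly with the price
def optimize_fun_test(price):
    return 0.25 * price / 100.0

def eq_to_solve(x):
    return optimize_fun_test(x) - 0.1233452

sol = root(eq_to_solve, x0=np.array([100.0]))
print(sol.x)  # ~49.34, the price that hits the target margin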

Related

Variation on linear programming?

I'm trying to find an existing algorithm for the following problem:
For example, let's say we have 3 variables, x, y, z (all must be integers).
I want to find values for all variables that MUST match some constraints, such as x+y<4, x<=50, z>x, etc.
In addition, there are extra POSSIBLE constraints, like y>=20, etc. (same as before).
The objective function (whose value I'm interested in maximizing) is the number of EXTRA constraints that are met in the optimal solution (the "must" constraints, plus the requirement that all values are integers, are mandatory; without them there's no valid solution).
If you're using OR-Tools: since the model is integral, I would recommend CP-SAT, as it offers indicator constraints with a nice API.
The API would be:
b = model.NewBoolVar('indicator variable')
model.Add(x + 2 * y >= 5).OnlyEnforceIf(b)
...
model.Maximize(sum(indicator_variables))
To get maximal performance, I would recommend using parallelism.
solver = cp_model.CpSolver()
solver.parameters.log_search_progress = True
solver.parameters.num_search_workers = 8 # or more on a bigger computer
status = solver.Solve(model)
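Putting the pieces together, here is a minimal end-to-end sketch; the variables, their bounds, and the particular "must" and "possible" constraints are hypothetical stand-ins based on the question:
from ortools.sat.python import cp_model

model = cp_model.CpModel()

# integer variables with hypothetical bounds
x = model.NewIntVar(-50, 50, 'x')
y = model.NewIntVar(0, 100, 'y')
z = model.NewIntVar(-50, 100, 'z')

# "must" constraints are added unconditionally
model.Add(x + y < 4)
model.Add(z > x)

# each "possible" constraint gets an indicator variable
indicator_variables = []
b = model.NewBoolVar('y_ge_20')
model.Add(y >= 20).OnlyEnforceIf(b)
indicator_variables.append(b)

# maximize the number of satisfied optional constraints
model.Maximize(sum(indicator_variables))

solver = cp_model.CpSolver()
solver.parameters.num_search_workers = 8
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y), solver.Value(z))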

How to use min & max in objective function in pyomo

I am very new to Pyomo, working on a use case where my objective function coefficient is dynamic and needs a min/max function.
Objective function = Max( sum(P * UC) - sum((P - min(P)) * UC) )
where P is the variable to be optimized and UC is a value derived from some calculation.
I have a few doubts:
How can I use a min or max function in the objective function? I have tried np.min and calling a function, but it gives an error since the function has an if-else condition.
I have tried multiple things but none seems to be working. If someone can help me with dummy code that will be great.
Thanks in Advance.
Min could be implemented by defining a new variable min_P, which needs to be smaller than any of the P, expressed by constraints:
min_P <= P[i] for all i
This will make sure that min_P is not larger than the smallest of the P. Then you can just use min_P in your objective function. I assume you know how to define constraints like this. This might result in an unbounded variable problem, depending on how exactly you are optimizing, but it should put you on the right track.
The max case is analogous, if you define another variable for the expression sum(P * UC) - sum((P - min(P)) * UC).
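A minimal Pyomo sketch of the min_P construction (the model, index set, and dimensions here are hypothetical):
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.I = pyo.RangeSet(1, 5)
m.P = pyo.Var(m.I, within=pyo.NonNegativeReals)
m.min_P = pyo.Var(within=pyo.NonNegativeReals)

# min_P <= P[i] for all i: min_P can be no larger than the smallest P
def min_p_rule(m, i):
    return m.min_P <= m.P[i]
m.MinPBound = pyo.Constraint(m.I, rule=min_p_rule)

# m.min_P can now appear in the objective expression in place of min(P)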
It is not clear whether UC is a parameter or a variable itself (calculated in another constraint). In the latter case, the whole problem will be highly nonlinear and should be reconsidered.
I do not understand your AbstractModel vs ConcreteModel question. If you have the data available, use a ConcreteModel. Apart from that, see here.

Restricting result of scipy minimization (SLSQP)

I would like to minimize an objective function which calls simulation software at every step and returns a scalar. Is there any way to restrict the result of the objective function? For example, I would like to get the values of the variables which bring the result as close to 1 as possible.
I tried to simply subtract 1 from the result of the objective function, but that didn't help. I also played around with constraints, but if I understand them correctly they are only for the input variables. Another way could be to create a log which stores the values of all variables after every iteration (which I'm doing already). In the end it should be possible to search for the iteration which had a result closest to 1 and return its variable configuration. The problem is that the minimization probably runs way too long and creates useless results. Is there any better way?
def objective(data):
    """
    Optimization function.
    :param data: list containing the current guess (list of float values)
    :return: each iteration returns a scalar which should be minimized
    """
    # do simulation and calculate scalar
    return result - 1.0  # doesn't work since result can become negative

def optimize(self):
    """
    Daemon which triggers input, reads output and optimizes results.
    :return: optimized results
    """
    # initialize log, initial guess etc.
    sol = minimize(self.objective, x0, method='SLSQP',
                   options={'eps': 1e-3, 'ftol': 1e-9}, bounds=boundList)
The goal is to find a solution which can be adapted to any target value. The user should be able to enter a value and the minimization will return the best variable configuration for this target value.
As discussed in the comments, one way of achieving this is to use
return (result - 1.0) ** 2
in objective. Then the results cannot become negative and the optimization will try to find result in such a way that it is close to your target value (e.g. 1.0 in your case).
Illustration, using first your current set-up:
from scipy.optimize import minimize

def objective(x, target_value):
    # replace this by your actual calculations
    result = x - 9.0
    return result - target_value

# add some bounds for all the parameters you have
bnds = [(-100, 100)]

# target_value is passed in args; feel free to add more
res = minimize(objective, (1,), args=(1.0,), bounds=bnds)

if res.success:
    # that's the optimal x
    print(f"optimal x: {res.x[0]}")
else:
    print("Sorry, the optimization was not successful. Try another initial"
          " guess or optimization method.")
As we chose -100 as the lower bound for x and ask to minimize the objective, the optimal x is -100 (will be printed if you run the code from above). If we now replace the line
return result - target_value
by
return (result - target_value) ** 2
and leave the rest unchanged, the optimal x is 10 as expected.
Please note that I pass your target value as an additional argument so that your function is slightly more flexible.

Pyomo: Minimize for Max Value in Vector

I am optimizing the behavior of battery storage combined with solar PV to generate the highest possible revenue stream.
I now want to add one more revenue stream: Peak Shaving (or Demand Charge Reduction)
My approach is as follows:
In addition to the price per kWh, an industrial customer pays for the maximal amount of power (kW) drawn from the grid in one period (i = 1:end), the so-called demand charges.
This maximum amount is found in the vector P_Grid = P_GridLoad (energy self-consumed from the grid) + P_GridBatt (energy used to charge the battery)
There exists a price vector which tells the price per kW for all points in time
I now want to generate a vector P_GridMax that is zero at all points in time except the moment when the maximal value of P_Grid occurs (where it equals max(P_Grid)).
Thus, the vector P_GridMax consists of zeros and exactly one nonzero element (not more!).
In doing so, I can now multiply this vector with the price vector, sum up over all points in time and receive the billed demand charges
By including this vector into the objective of my model I can minimize these charges
Now, does anybody see a solution for how to formulate such a constraint (P_GridMax)? I already updated my objective function and defined P_Grid.
Any other approach would also be welcome.
This is the relevant part of my model, with P_xxx = power flow vectors, C_xxx = price vectors, ...
m.P_Grid = Var(m.i_TIME, within=NonNegativeReals)
m.P_GridMax = Var(m.i_TIME, within=NonNegativeReals)

# Minimize electricity bill
def Total_cost(m):
    return ... + sum(m.P_GridMax[i] * m.C_PowerCosts[i] for i in m.i_TIME) - ...
m.Cost = Objective(rule=Total_cost)

## Peak shaving constraints
def Grid_Def(m, i):
    return m.P_Grid[i] == m.P_GridLoad[i] + m.P_GridBatt[i]
m.Bound_Grid = Constraint(m.i_TIME, rule=Grid_Def)

def Peak_Rule(m, i):
    ????
m.Bound_Peak = Constraint(m.i_TIME, rule=Peak_Rule)
Thank you very much in advance! Please be aware that I have very little experience with Python/Pyomo coding; I would really appreciate extensive explanations :)
Best,
Mathias
Another idea is that you don't actually need to index your P_GridMax variable with time.
If you're dealing with demand costs they tend to be fixed over some period, or in your case it seems that they are fixed over the entire problem horizon (since you're only looking for one max value).
In that case you would just need to do:
m.P_GridMax = pyo.Var(domain=pyo.NonNegativeReals)
def Peak_Rule(m, i):
return m.P_GridMax >= m.P_Grid[i]
m.Bound_Peak = pyo.Constraint(m.i_TIME,rule=Peak_Rule)
If you're really set on multiplying your vectors element-wise, you can also just create a new variable that represents that indexed product and apply the same principle to extract the max value.
Here is one way to do this:
introduce a binary helper variable ismax[i] for i in i_TIME. This variable is 1 if the maximum is obtained in period i and 0 otherwise. Then obviously you have a constraint sum(ismax[i] for i in i_TIME) == 1: the maximum must be attained in exactly one period.
Now you need two additional constraints:
if ismax[i] == 0 then P_GridMax[i] == 0.
if ismax[i] == 1 then for all j in i_TIME we must have P_GridMax[i] >= P_Grid[j].
The best way to formulate this would be to use indicator constraints but I don't know Pyomo so I don't know whether it supports that (I suppose it does but I don't know how to write them). So I'll give instead a big-M formulation.
For this formulation you need to define a constant M so that P_Grid[i] can not exceed that value for any i. With that the first constraint becomes
P_GridMax[i] <= M * ismax[i]
That constraint forces P_GridMax[i] to 0 unless ismax[i] == 1. For ismax[i] == 1 it is redundant.
The second constraint would be for all j in i_TIME
P_GridMax[i] + M * (1 - ismax[i]) >= P_Grid[j]
If ismax[i] == 0 then the left-hand side of this constraint is at least M, so by the definition of M it will be satisfied no matter what the value of P_GridMax[i] is (the first constraint forces P_GridMax[i] == 0 in that case). For ismax[i] == 1 the left-hand side of the constraint becomes just P_GridMax[i], exactly what we want.
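A rough Pyomo translation of this big-M formulation (an untested sketch that reuses m, m.i_TIME, m.P_Grid and m.P_GridMax from the question; the value of M is an assumed upper bound on P_Grid):
import pyomo.environ as pyo

M = 1000.0  # assumed upper bound on P_Grid[i] for all i

m.ismax = pyo.Var(m.i_TIME, within=pyo.Binary)

# the maximum is attained in exactly one period
m.OneMax = pyo.Constraint(expr=sum(m.ismax[i] for i in m.i_TIME) == 1)

# P_GridMax[i] is forced to zero unless ismax[i] == 1
def zero_unless_max_rule(m, i):
    return m.P_GridMax[i] <= M * m.ismax[i]
m.ZeroUnlessMax = pyo.Constraint(m.i_TIME, rule=zero_unless_max_rule)

# where ismax[i] == 1, P_GridMax[i] must dominate every P_Grid[j]
def dominate_rule(m, i, j):
    return m.P_GridMax[i] + M * (1 - m.ismax[i]) >= m.P_Grid[j]
m.Dominate = pyo.Constraint(m.i_TIME, m.i_TIME, rule=dominate_rule)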

scipy.optimize.minimize with matrix constraints

I am new to the scipy.optimize module. I am using its minimize function, trying to find an x to minimize a multivariate function, which takes matrix input but returns a scalar value. I have one equality constraint and one inequality constraint, both of which take vector input and return vector values. In particular, here is the list of constraints:
sum(x) = 1 ;
AST + np.log2(x) >= 0
where AST is just a parameter. I defined my constraint functions as below:
For the equality constraint: lambda x: sum(x) - 1
For the inequality constraint:
def asset_cons(x):
    # global AST
    if np.logical_and.reduce((AST + np.log2(x)) >= 0):
        return 0.01
    else:
        return -1
Then I call
cons = ({'type': 'eq', 'fun': lambda x: sum(x) - 1},
        {'type': 'ineq', 'fun': asset_cons})
res = optimize.minimize(test_obj, [0.2, 0.8], constraints=cons)
But I still get an error complaining about my constraint function. Is a constraint function allowed to return a vector value, or do I have to return a scalar in order to use this minimize function?
Could anyone help me to see if the way I specify the constraints has any problems?
In principle this does not look that wrong at all. However, it is a bit difficult to say without seeing something about test_obj and the actual error. Does it throw an exception (which hints at a programming error) or complain about convergence (which hints at a mathematical challenge)?
You have the basic idea right; you need to have a function accepting an input vector with N elements and returning the value to be minimized. Your boundary conditions should also accept the same input vector and return a single scalar as their output.
To my eye there is something wrong with your boundary conditions. The first one (sum(x) - 1) is fine, but the second one is mathematically challenging, as you have defined it as a stepwise function. Many optimization algorithms want continuous functions, preferably with quite smooth behaviour. (I do not know if the algorithms used by this function handle stepwise functions well, so this is just a guess.)
If the above holds true, you might make things easier by, for example:
np.amin(AST + np.log2(x))
The function will be non-negative if all AST + log2(x[n]) >= 0. (It is still not extremely smooth, but if that is a problem it is easy to improve.) And now it'll also fit into one lambda.
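For example, the whole constraint set could then look like this; the value of AST and the stand-in objective are hypothetical placeholders, since the question's test_obj is not shown:
import numpy as np
from scipy.optimize import minimize

AST = 3.0  # hypothetical value for the parameter

# stand-in objective for illustration only
def test_obj(x):
    return np.sum(x ** 2)

cons = ({'type': 'eq', 'fun': lambda x: sum(x) - 1},
        {'type': 'ineq', 'fun': lambda x: np.amin(AST + np.log2(x))})

# bounds keep x positive so np.log2 stays defined
res = minimize(test_obj, [0.2, 0.8], constraints=cons,
               bounds=[(1e-6, 1), (1e-6, 1)])
print(res.x, res.success)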
If you have difficulties in convergence, you should probably try both COBYLA and SLSQP, unless you already know that one of them is better for your problem.
