I have an optimization problem with constraints, but the COBYLA solver doesn't seem to respect the constraints I specify.
My optimization problem:
cons = ({'type':'ineq', 'fun':lambda t: t},) # all variables must be positive
minimize(lambda t: -stateEst(dict(zip(self.edgeEvents.keys(), t))), (0.1,)*len(self.edgeEvents), constraints=cons, method='COBYLA')
and stateEst is defined as:
def stateEst(ts):
    val = 0
    for edge, nextState in self.edgeEvents.iteritems():
        val += edge_probability(self, edge, ts) * estimates[nextState]
        val += node_probability(self, edge.head, ts, edge_list=[edge]) * cost
    for node, nextState in self.nodeEvents.iteritems():
        val += node_probability(self, node, ts) * \
            (estimates[nextState] + cost*len([e for e in node.incoming if e in self.compEdges]))
    return val
The probability functions are only defined for positive t values. The dictionary is necessary because the probabilities are calculated with respect to the 'named' t-values.
When I run this, I notice that COBYLA tries a value of -0.025 for one of the t-values. Why is the optimization not respecting the constraints?
COBYLA is, technically speaking, an infeasible-point method, which means the iterates are not guaranteed to satisfy your constraints at every step; for these algorithms, feasibility only matters at final convergence.
Using an objective function that is not defined everywhere is therefore problematic. You may be forced to switch to a feasible-point method.
Alternatively, you could generalize your objective so that negative t values are penalized. But this is problem-dependent and can introduce other issues (convergence, numerical stability).
Try L-BFGS-B, which is limited to bound constraints; that is not a limitation here (for your current problem!).
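For illustration, a minimal sketch of the bound-constrained call (stateEst and self.edgeEvents are taken from the question and assumed to be in scope); a small positive lower bound keeps the iterates away from t = 0, where the probabilities are undefined:
from scipy.optimize import minimize

n = len(self.edgeEvents)
bounds = [(1e-6, None)] * n  # t > 0 for every variable, no upper bound

res = minimize(
    lambda t: -stateEst(dict(zip(self.edgeEvents.keys(), t))),
    x0=(0.1,) * n,
    bounds=bounds,
    method='L-BFGS-B',
)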
For something this simple, just redefine your function so that it accepts any real value, by substituting np.exp(t) (or even t**2) for t, and then take the log (or square root) of the solution.
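A rough sketch of that reparametrization, under the same assumptions as above (stateEst and self.edgeEvents come from the question); the solver works on an unconstrained variable u and the positive t is recovered as exp(u):
import numpy as np
from scipy.optimize import minimize

keys = list(self.edgeEvents.keys())

def neg_objective(u):
    t = np.exp(u)  # always positive, for any real u
    return -stateEst(dict(zip(keys, t)))

res = minimize(neg_objective, x0=np.log([0.1] * len(keys)))
t_opt = np.exp(res.x)  # map the solution back to the original t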
Related
I'm trying to find an existing algorithm for the following problem:
E.g., let's say we have 3 variables, x, y, z (all must be integers).
I want to find values for all variables that MUST match some constraints, such as x+y<4, x<=50, z>x, etc.
In addition, there are extra POSSIBLE constraints, like y>=20, etc. (same as before).
The objective function (whose value I'm interested in maximizing) is the number of EXTRA constraints that are met in the optimal solution (the "must" constraints, plus the requirement that all values are integers, are hard demands; without them, there is no valid solution).
If using OR-Tools, as the model is integral, I would recommend using CP-SAT, as it offers indicator constraints with a nice API.
The API would be:
b = model.NewBoolVar('indicator variable')
model.Add(x + 2 * y >= 5).OnlyEnforceIf(b)
...
model.Maximize(sum(indicator_variables))
To get maximal performance, I would recommend using parallelism.
solver = cp_model.CpSolver()
solver.parameters.log_search_progress = True
solver.parameters.num_search_workers = 8 # or more on a bigger computer
status = solver.Solve(model)
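To make that concrete, here is a small self-contained sketch for the toy variables from the question; the variable bounds and the particular hard/soft constraints are made up for illustration:
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(-100, 100, 'x')
y = model.NewIntVar(-100, 100, 'y')
z = model.NewIntVar(-100, 100, 'z')

# hard ("must") constraints
model.Add(x + y < 4)
model.Add(x <= 50)
model.Add(z > x)

# soft ("possible") constraints, each guarded by an indicator variable
indicator_variables = []
for name, ct in [('y_ge_20', y >= 20), ('x_plus_2y_ge_5', x + 2 * y >= 5)]:
    b = model.NewBoolVar(name)
    model.Add(ct).OnlyEnforceIf(b)
    indicator_variables.append(b)

model.Maximize(sum(indicator_variables))

solver = cp_model.CpSolver()
solver.parameters.log_search_progress = True
solver.parameters.num_search_workers = 8
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y), solver.Value(z))
    print('soft constraints satisfied:', int(solver.ObjectiveValue()))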
I am very new to Pyomo, working on a use case where my objective function coefficient is dynamic and needs a min-max function.
Objective function = Max( sum(P * UC) - sum(P - min(P)) * UC )
where P is the variable that needs to be optimized and UC is a function whose value is derived from some calculation.
I have a few doubts:
How do I use a min or max function in the objective? I have tried np.min and calling my own function, but it gives an error since the function contains an if/else condition.
I have tried multiple things but none of them seems to work. If someone can help me with dummy code, that would be great.
Thanks in Advance.
Min could be implemented by defining a new variable min_P, which needs to be smaller than any of the P, expressed by constraints:
min_P <= P[i] for all i
This makes sure that min_P is not larger than the smallest of the P. Then you can just use min_P in your objective function. I assume you know how to define constraints like this. This might result in an unbounded variable problem, depending on how exactly you are optimizing, but it should put you on the right track.
The max case is analogous, if you define another variable for the expression sum(P * UC) - sum(P - min(P)).
It is not clear whether UC is a parameter or a variable itself (calculated in another constraint). In the latter case, the whole problem will be highly nonlinear and should be reconsidered.
I do not understand your AbstractModel vs ConcreteModel question. If you have the data available, use a ConcreteModel. Apart from that, see here.
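A minimal Pyomo sketch of the min_P reformulation, assuming UC is a fixed parameter (the index set, data, and bounds below are invented for illustration):
import pyomo.environ as pyo

UC = {1: 2.0, 2: 3.0, 3: 1.5}  # assumed fixed coefficients
model = pyo.ConcreteModel()
model.I = pyo.Set(initialize=list(UC.keys()))
model.P = pyo.Var(model.I, bounds=(0, 10))
model.min_P = pyo.Var()

# min_P <= P[i] for all i
def min_rule(m, i):
    return m.min_P <= m.P[i]
model.min_con = pyo.Constraint(model.I, rule=min_rule)

# Max( sum(P * UC) - sum(P - min_P) * UC )
def obj_rule(m):
    return sum(m.P[i] * UC[i] for i in m.I) - sum((m.P[i] - m.min_P) * UC[i] for i in m.I)
model.obj = pyo.Objective(rule=obj_rule, sense=pyo.maximize)

pyo.SolverFactory('glpk').solve(model)
Because the objective is maximized, min_P is pushed upward against the constraints and ends up equal to the smallest P, so the reformulation is tight in this sketch.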
I have a hyperbolic function and I need to find its zero. I have tried various classical methods (bisection, Newton, and so on).
The second derivatives are continuous but not accessible analytically, so I have to exclude methods that use them.
For the purposes of my application, Newton's method is the only one providing sufficient speed, but it is relatively unstable if I'm not close enough to the actual zero. Here is a simple screenshot:
The zero is somewhere around 0.05, and since the function diverges at 0, if I take an initial guess larger than the location of the minimum by a certain extent, then I obviously have problems with the asymptote.
Is there a more stable method in this case that would eventually offer speeds comparable to Newton?
I also thought of transforming the function into an equivalent, better-behaved function with the same zero and only then applying Newton, but I don't really know which transformations I can use.
Any help would be appreciated.
Dekker's or Brent's method should be almost as fast as Newton. If you want something simple to implement yourself, the Illinois variant of the regula falsi method is also reasonably fast. These are all bracketing methods, so they should not leave the domain if the initial interval is inside the domain.
import numpy as np

def illinois(f, a, b, tol=1e-8):
    '''Regula falsi resp. false position method with
    the Illinois anti-stalling variation.'''
    fa = f(a)
    fb = f(b)
    if abs(fa) < abs(fb):
        a, fa, b, fb = b, fb, a, fa
    while np.abs(b - a) > tol:
        c = (a*fb - b*fa) / (fb - fa)  # false-position (secant) point
        fc = f(c)
        if fa*fc < 0:
            fa *= 0.5  # Illinois trick: halve the retained endpoint's value to avoid stalling
        else:
            a, fa = b, fb
        b, fb = c, fc
    return b, fb
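For example, for a function with the qualitative behaviour described in the question (diverging at 0, root near 0.05; the concrete f below is only a hypothetical stand-in), a bracket that lies strictly inside the domain keeps every iterate valid:
f = lambda x: 1.0/x - 20.0  # hypothetical stand-in: diverges at 0, root at x = 0.05
root, fval = illinois(f, 1e-3, 1.0)  # bracket [1e-3, 1] is inside the domain x > 0
print(root, fval)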
How about using log(x) instead of x?
For your case, @sams-studio's answer might work, and I would try that first. In a similar situation, also in a multivariate context, I used Newton-homotopy methods.
Basically, you limit the Newton step so that the absolute value of y keeps descending.
The cheapest way to implement this is to halve the Newton step whenever |y| increases compared to the last step. After a few steps, you are back at plain Newton with full second-order convergence.
Disclaimer: If you can bound your solution (you know a maximal x), the answer from @Lutz Lehmann would also be my first choice.
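A rough sketch of that step-halving (damped) Newton idea, assuming the first derivative fprime is available (only second derivatives are unavailable, per the question); f below is again just a hypothetical stand-in:
def damped_newton(f, fprime, x0, tol=1e-10, max_iter=100):
    x, y = x0, f(x0)
    for _ in range(max_iter):
        step = y / fprime(x)
        # halve the step until |f| actually decreases (and we stay in the domain x > 0)
        while x - step <= 0 or abs(f(x - step)) >= abs(y):
            step *= 0.5
            if abs(step) < tol:
                break
        x = x - step
        y = f(x)
        if abs(y) < tol:
            break
    return x

f = lambda x: 1.0/x - 20.0  # hypothetical stand-in, root at 0.05
fprime = lambda x: -1.0/x**2
print(damped_newton(f, fprime, 1.0))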
I was wondering if I could build a min-max objective function in PuLP, following "Can I make a Min Z = max(a, b, c) in PuLP"; however, when using this code
ilp_prob = pulp.LpProblem("Minimize Problem", pulp.LpMinimize)
x = []
if m > 3:
    return 1, 1
for i in range(m):
    temp = []
    for j in range(len(jobs)):
        temp += [pulp.LpVariable("x_%s_%s" % ((i+1), (j+1)), 0, 1, cat='Binary')]
    x += [temp]
ilp_prob += max([pulp.lpSum([jobs[j]*x[i][j] for j in range(len(jobs))] for i in range(m))])
for i in range(len(jobs)):
    ilp_prob += pulp.lpSum([x[j][i] for j in range(m)]) == 1
ilp_prob.solve()
It just returns all 1s in x[0] and all 0s in x[1].
I'm pretty sure you can't just use Python's (!) max on PuLP's internal expressions. These solvers work on a very specific problem specification, the LP standard form, which has no concept of that.
The exception would be if PuLP overloaded this max function for its data structures (I don't know whether that is even possible in Python), but I'm pretty sure PuLP does not support reformulations like that (some reformulation is needed, because, again, the target is the standard form).
cvxpy, for example, does not overload it, but introduces its own customized max functions, which internally transform your problem.
That being said, I'm surprised your code runs without a critical error, but I'm too lazy to check PuLP's sources here.
Have a look at the usual LP/IP formulation guides.
A first idea would be:
target: min (max(a,b,c))
reformulation:
introduce a new variable z
add constraints:
z >= a
z >= b
z >= c
assumption: the objective somehow wants to minimize z (maximizing will get you in trouble, as the problem becomes unbounded!)
this is the case here, as the final objective for our target would look like:
min(z)
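Applied to the code above, a minimal PuLP sketch of that reformulation could look like this (jobs and m are assumed to be defined as in the question):
import pulp

ilp_prob = pulp.LpProblem("Minimize_Max_Load", pulp.LpMinimize)
x = [[pulp.LpVariable("x_%s_%s" % (i+1, j+1), cat='Binary')
      for j in range(len(jobs))] for i in range(m)]

z = pulp.LpVariable("z", lowBound=0)  # auxiliary variable standing in for max(...)
ilp_prob += z  # objective: min z

# z >= load of machine i, for every i (together these model z >= max of the loads)
for i in range(m):
    ilp_prob += z >= pulp.lpSum(jobs[j] * x[i][j] for j in range(len(jobs)))

# each job is assigned to exactly one machine
for j in range(len(jobs)):
    ilp_prob += pulp.lpSum(x[i][j] for i in range(m)) == 1

ilp_prob.solve()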
Remark: One has to be careful that the problem stays linear/convex (depending on the solver). In this case (our simple example; I did not check your whole model) I don't see a problem, but in more complex cases, min(max(complex_expression)) subject to complex constraints might introduce non-convexity (and then cannot be solved by conic solvers, including LP solvers).
And just to throw a keyword into the ring: your approach/objective sounds a bit like robust optimization, where usually some worst-case scenario is optimized. Not all multi-objective optimization problems treat multiple objective components like that.
I am new to the scipy.optimize module. I am using its minimize function, trying to find an x that minimizes a multivariate function, which takes a matrix input but returns a scalar value. I have one equality constraint and one inequality constraint, both taking vector input and returning vector values. In particular, here is the list of constraints:
sum(x) = 1 ;
AST + np.log2(x) >= 0
where AST is just a parameter. I defined my constraint functions as below:
For the equality constraint: lambda x: sum(x) - 1
For the inequality constraint:
def asset_cons(x):
    #global AST
    if np.logical_and.reduce((AST + np.log2(x)) >= 0):
        return 0.01
    else:
        return -1
Then I call
cons = ({'type':'eq', 'fun': lambda x: sum(x) - 1},
{'type':'ineq', 'fun': asset_cons})
res = optimize.minimize(test_obj, [0.2, 0.8], constraints=cons)
But I still get an error complaining about my constraint function. Is the constraint function allowed to return a vector value, or do I have to return a scalar in order to use this minimize function?
Could anyone help me see whether the way I specify the constraints has any problems?
In principle this does not look that wrong at all. However, it is a bit difficult to say without seeing something about test_obj and the actual error. Does it throw an exception (which hints at a programming error) or complain about convergence (which hints at a mathematical challenge)?
You have the basic idea right; you need to have a function accepting an input vector with N elements and returning the value to be minimized. Your boundary conditions should also accept the same input vector and return a single scalar as their output.
To my eye there is something wrong with your boundary conditions. The first one (sum(x) - 1) is fine, but the second one is mathematically challenging, as you have defined it as a stepwise function. Many optimization algorithms want continuous functions with preferably quite smooth behaviour. (I do not know whether the algorithms used by this function handle stepwise functions well, so this is just a guess.)
If the above holds true, you might make things easier by using, for example:
np.amin(AST + np.log2(x))
The function will be non-negative if all AST + log2(x[n]) >= 0. (It is still not extremely smooth, but if that is a problem it is easy to improve.) And now it'll also fit into one lambda.
If you have difficulties in convergence, you should probably try both COBYLA and SLSQP, unless you already know that one of them is better for your problem.
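A minimal sketch of what that could look like, with a made-up test_obj and AST just for illustration:
import numpy as np
from scipy import optimize

AST = 2.0  # made-up parameter value
test_obj = lambda x: np.sum((x - 0.5)**2)  # made-up objective for illustration

cons = ({'type': 'eq',   'fun': lambda x: sum(x) - 1},
        {'type': 'ineq', 'fun': lambda x: np.amin(AST + np.log2(x))})

res = optimize.minimize(test_obj, [0.3, 0.7], constraints=cons, method='SLSQP')
print(res.x)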