Translating objective function and constraints into code - python

I have been trying, with no luck, to solve a problem within a personal project for a few weeks now. Recently I received help from the Math Stack Exchange in the form of the answer to this question: https://math.stackexchange.com/a/4089367/907708 and I am now trying to translate the equations in the answer into Python code, but I keep getting stuck.
I will restate the question here for ease of access (I tried to post the answer here too, but the formatting came out all wacky, so you'll have to refer to the link above to find it. oops)
Question:
Situation: I am trying to minimize the standard deviation of a series of points of differing heights in a list, with the constraint that each point in the list can be raised anywhere from 0 to 2 units.
Example: I have a list of points which are equidistant on the x axis but not the y axis.
h = [20, 24, 28, 24, 20, 18, 20, 32, 30, 28, 20, 24]
Where each number in the list represents that point's height.
I also have the constraint that each point in the list can be raised by a c value anywhere from 0 to 2 in order to help achieve a smaller standard deviation.
I am trying to create an algorithm that minimizes the standard deviation of the points in h, with the constraint that each point in h can be raised by some c with 0 <= c <= n, for an h of any length with any values and for any n > 0.
I am very new to optimization problems and although I have seen problems that look similar to my question, I have not seen any that I've been able to gather enough information to help push me further towards an answer.
If possible, I was hoping someone would help me define the objective function, constraints, and other necessary functions that would lead me to an answer.
This is not a homework problem so therefore I have no course material to help guide me to an answer. The only guidance I have is from the comments and answers to this post. Please understand that I am in no way a mathematician so I really need all the help I can get. Thanks!
End Question:
I have looked into countless scipy.optimize examples to try to figure out how to format my code to make this work, but I have been unsuccessful so far. I was hoping someone with more knowledge on the subject could help me translate these formulas, or tell me what libraries or material I should look into in order to answer this question. (I would post the collection of code snippets I have tried so far, but none of them got me anywhere of value and I don't think they would provide any useful insight, so I purposely omitted them.)
Any feedback is greatly appreciated and I will be sure to respond quickly to any questions or comments you have. Thank you so much!
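For quick reference, the formulation from the linked math.SE answer (the one both answers below implement) is: minimize the sum of squared deviations from the shifted mean,

sum over i of (h_i + c_i - (h_bar + c_bar))^2, subject to 0 <= c_i <= n for every i,

where h_bar and c_bar are the means of the heights h_i and of the offsets c_i.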

You can install PyGad. Genetic algorithms work great for this kind of optimization problem. They are also much easier to implement, imho. Just do pip install pygad. Below is the code to solve your problem, using the default config of PyGad. The fitness function evaluates how good any candidate solution is. Since PyGad tries to maximize (and we want to minimize), we return 1/evaluation in the fitness function. We can enforce the constraints by simply returning -100 (a bad fitness) for any invalid candidate solution. I plugged in the formula given in the math.SE answer and pre-compute x_bar, even though the whole run should take less than 5 seconds anyway. The problem-specific configuration is: the fitness_func, num_genes, and an initial guess for the ci values (init_range_low/high).
import pygad
import numpy as np

X = [20, 24, 28, 24, 20, 18, 20, 32, 30, 28, 20, 24]
X_BAR = np.array(X).sum() / len(X)

def fitness_function(solution, solution_idx):
    # mean of the candidate offsets c_i
    c_bar = np.array(solution).sum() / len(solution)
    accum = 0
    for i, ci in enumerate(solution):
        # enforce 0 <= c_i <= 2 by giving invalid candidates a bad fitness
        if ci < 0 or ci > 2:
            return -100
        accum += (X[i] + ci - (X_BAR + c_bar))**2
    # PyGad maximizes, so return the reciprocal of the sum of squared deviations
    fitness = 1/accum
    return fitness

ga_instance = pygad.GA(num_generations=100,
                       num_parents_mating=7,
                       fitness_func=fitness_function,
                       sol_per_pop=50,
                       num_genes=len(X),
                       init_range_low=0,
                       init_range_high=2,
                       parent_selection_type="sss",
                       keep_parents=7,
                       crossover_type="single_point",
                       mutation_type="random",
                       mutation_percent_genes=10)
ga_instance.run()
ga_instance.plot_result()

sol, sol_fitness, sol_idx = ga_instance.best_solution()
print("Parameters of the best solution : {solution}".format(solution=sol))
The plot produced by plot_result() shows how the fitness evolves over the generations.
Finally, the solution given by the algorithm is:
[1.99574728e+00 1.00786156e+00 2.17545152e-02 1.16404525e+00
1.98465204e+00 1.98997128e+00 1.98167328e+00 1.32911147e-02
1.17406735e-03 1.30281600e-04 1.99916130e+00 1.17383310e+00]

Another solution using scipy.optimize.minimize:
import numpy as np
from scipy.optimize import minimize

x = np.array([20, 24, 28, 24, 20, 18, 20, 32, 30, 28, 20, 24])

# objective to minimize: sum of squared deviations from the shifted mean
# (here c is the variable and x is an additional argument)
def obj(c, x):
    return np.sum((x + c - (np.mean(x) + np.mean(c)))**2)

# variable bounds: 0 <= ci <= 2
bounds = [(0, 2) for _ in range(len(x))]

# initial guess for the solver (an array of ones)
c0 = np.ones_like(x)

# call the solver and pass a function that only depends on the variable c
res = minimize(lambda c: obj(c, x), x0=c0, bounds=bounds)

# your solution
print(res.x)
gives
array([2. , 1.11111256, 0. , 1.11111256, 2. ,
2. , 2. , 0. , 0. , 0. ,
2. , 1.11111446])
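As a quick sanity check (my addition, not part of the original answer), you can compare the standard deviation of the heights before and after applying the returned offsets:

import numpy as np

x = np.array([20, 24, 28, 24, 20, 18, 20, 32, 30, 28, 20, 24])
c = res.x  # offsets returned by the minimize call above

print(np.std(x))      # standard deviation of the original heights
print(np.std(x + c))  # standard deviation after raising each point by its c_i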

Related

Is there a programmable method for calculating the exponent value of a power sum

Say I have an equation:
a^x + b^x + c^x = n
Since I know a, b, c and n, is there a way to solve for x?
I have been struggling with this problem for a while now, and I can't seem to find a solution online.
My current method is to iterate over x until the left side is "close enough" to n. The method is pretty slow, and it sits inside an already computationally expensive algorithm.
Example:
3^x + 5^x + 7^x = 83
How do I go about solving for x? (It is 2 in this case.)
I tried the equation in WolframAlpha and it seems to know how to solve it, but any other program fails to do so.
I should probably also mention that x is not an integer (mostly in the 0.01 to 0.05 range in my case).
You can use the scipy library. You can install it with the command pip install scipy.
Then, this code will work:
from scipy.optimize import root

# the equation rearranged so that the right-hand side is 0
def eqn(x):
    return 3**x + 5**x + 7**x - 83

myroot = root(eqn, 2)
print(myroot.x)
Here, root takes two arguments, root(fun, x0), where fun is the function of the equation and x0 is a rough estimate of the root value. For example, if you know that your root will fall in the range (0, 1), then you can enter 0 as the rough estimate.
Also make sure the equation entered in the code is such that R.H.S. is equal to 0.
In our case 3^x + 5^x + 7^x = 83 becomes 3^x + 5^x + 7^x - 83 = 0
Reference Documentation
If you want to stick to base Python, it is easy enough to implement Newton's method for this problem:
from math import log

def solve(a, b, c, n, guess, tol=1e-12):
    x = guess
    for i in range(100):
        # Newton step: x_new = x - f(x)/f'(x), with f(x) = a**x + b**x + c**x - n
        x_new = x - (a**x + b**x + c**x - n)/(log(a)*a**x + log(b)*b**x + log(c)*c**x)
        if abs(x - x_new) < tol:
            return x_new
        x = x_new
    return "Doesn't converge on a root"
Newton's method might fail to converge in some pathological cases, hence an escape valve for such cases. In practice it converges very rapidly.
For example:
>>> solve(3,5,7,83,1)
2.0
Despite all this, I think that Cute Panda's answer is superior. It is easy enough to do a straightforward implementation of such numerical algorithms, one that works adequately in most cases, but naive implementations like the one given above tend to be vulnerable to excessive round-off error as well as other problems. scipy uses highly optimized routines which are implemented in a much more robust way.
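For instance (my illustration, not from the original answer), scipy's bracketing solver brentq is a robust alternative whenever you can bracket the root:

from scipy.optimize import brentq

def f(x):
    return 3**x + 5**x + 7**x - 83

# Brent's method needs an interval [a, b] with f(a) and f(b) of opposite sign
root = brentq(f, 0, 10)
print(root)  # approximately 2.0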

Solve Linear Equation with constraints

I am pretty new to the subject of linear programming and would appreciate any pointers.
I have a slightly complicated equation but here is a simpler version of the problem:
x1 + x2 = 10
#subject to the following constraints:
0 <= x1 <= 5 and
3x1 <= x2 <= 20
Basically x2 has to have a value that is greater than 3 times that of x1. So in this case the solutions are x1 = [0, 1, 2] and, correspondingly, x2 = [10, 9, 8].
There is a lot of material out there on minimizing or maximizing an objective function, but this is not one of those problems. What do you call solving this type of problem, and what is the recommended way to solve it, preferably using some Python library that finds a single feasible solution or multiple ones?
Your problem could be stated as
min 0*x1+0*x2 ("zero coefficients")
subject to
x1+x2=10
3x1-x2<=0
x2<=20 (note that this constraint follows from x1,x2>=0 and their sum being 10)
This can easily be fed into a linear programming package such as pulp. I am more of an R user than a Python user, hence I cannot provide details. You could also solve it online without any programming.
EDIT: rereading your question, I see that your desired solutions are not continuous (e.g. it seems you are not looking for [2.5, 7.5] as a solution) but are restricted to integer values. The problem would then be called a "mixed integer problem" instead of a "linear problem". Pulp, however, should be able to solve it if you declare the variables x1, x2 as integers.
Another point is whether you are after ALL integer solutions given the constraints. There have been some discussions about that here on Stack Overflow; however, I am unsure whether pulp can do that out of the box.
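A minimal pulp sketch of that formulation (my code, not the answerer's; it assumes pulp is installed and declares the variables as integers, as suggested in the edit):

import pulp

prob = pulp.LpProblem("feasibility", pulp.LpMinimize)

# integer decision variables with 0 <= x1 <= 5 and 0 <= x2 <= 20
x1 = pulp.LpVariable("x1", lowBound=0, upBound=5, cat="Integer")
x2 = pulp.LpVariable("x2", lowBound=0, upBound=20, cat="Integer")

prob += 0 * x1 + 0 * x2   # dummy objective ("zero coefficients")
prob += x1 + x2 == 10     # x1 + x2 = 10
prob += 3 * x1 - x2 <= 0  # x2 >= 3*x1

prob.solve()
print(pulp.value(x1), pulp.value(x2))  # prints one feasible solution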

Getting the non-trivial solution to a set of linear equations

I'm trying to write a program that will allow me to solve a system of equations using numpy; however, I want the solution to be non-trivial (not all zeros). Obviously the program is just going to set everything to 0 and, boom, problem solved. I attempted to use a while loop (like below), but quickly found out it's going to continue to spit 0 back at me. I don't care if I end up using numpy; I'm open to other solutions if they are more elegant.
I actually haven't solved this particular set by hand; maybe the trivial solution is the only solution. If so, the principle still applies. Numpy seems to always spit 0 back.
Any help would be appreciated! Thanks.
import numpy as np

x1 = .5
x2 = .3
x3 = .2
x4 = .05
a = np.array([[x1, x2], [x3, x4]])
b = np.array([0, 0])
ans = np.linalg.solve(a, b)
while ans[0] == 0 and ans[1] == 0:
    print("got here")
    ans = np.linalg.solve(a, b)
print(ans)
In your case, the matrix a is invertible. Therefore your system of linear equations has only one solution and the solution is [0, 0]. Are you wondering why you only get that unique solution?
Check out Sympy and its solve and matrix calculation capabilities. Here are the pages for both.
http://docs.sympy.org/latest/tutorial/matrices.html
http://docs.sympy.org/latest/tutorial/solvers.html
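For example (my sketch, not part of the original answer), a non-trivial solution of a*x = 0 only exists when the matrix is singular, and Sympy can compute the null space directly:

from sympy import Matrix

# a singular matrix (second row is a multiple of the first), so A*x = 0 has
# non-trivial solutions; the 2x2 matrix in the question is invertible, which
# is why numpy only ever returns [0, 0] for it
A = Matrix([[1, 2], [2, 4]])

null_vectors = A.nullspace()  # basis vectors of the null space
print(null_vectors)           # [Matrix([[-2], [1]])]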

Can I make a Min Z = max(a,b,c) in PuLP

I was wondering if I could make a multiple-objective function in PuLP by following Can I make a Min Z = max(a,b,c) in PuLP; however, when using this code
ilp_prob = pulp.LpProblem("Miniimize Problem", pulp.LpMinimize)
x = []
if m > 3:
    return 1, 1
for i in range(m):
    temp = []
    for j in range(len(jobs)):
        temp += [pulp.LpVariable("x_%s_%s" % ((i+1), (j+1)), 0, 1, cat='Binary')]
    x += [temp]
ilp_prob += max([pulp.lpSum([jobs[j]*x[i][j] for j in range(len(jobs))] for i in range(m))])
for i in range(len(jobs)):
    ilp_prob += pulp.lpSum([x[j][i] for j in range(m)]) == 1
ilp_prob.solve()
It just returns all 1 in x[0], and all 0 in x[0].
I'm pretty sure you can't just use Python's (!) max on pulp's internal expressions! Those solvers work on a very specific problem specification, the LP standard form, where there is no concept for that!
The exception would be if pulp overloaded this max function for its data structures (I don't know if that's possible at all in Python), but I'm pretty sure pulp does not support reformulations like that (some reformulation is needed; as again: the target is the standard form).
cvxpy, for example, does not overload max, but introduces customized max functions which internally transform your problem.
That being said: I'm surprised your code runs without a critical error. But I'm too lazy to check pulp's sources here.
Have a look at the usual LP/IP formulation guides.
A first idea would be:
target: min (max(a,b,c))
reformulation:
introduce a new variable z
add constraints:
z >= a
z >= b
z >= c
assumption: the objective somehow wants to minimize z (maximizing will get you into trouble, as the problem becomes unbounded!)
this is the case here, as the final objective for our target would look like:
min(z)
Remark: One has to be careful that the problem stays linear/convex (depending on the solver). In this case (our simple example; I did not check your whole model) I don't see a problem, but in more complex cases, min(max(complex_expression)) subject to complex constraints, this might introduce non-convexity (and then it can't be solved by conic solvers, incl. LP solvers).
And just to throw a keyword into the ring: your approach/objective sounds a bit like robust optimization, where usually some worst-case scenario is optimized. Not all multi-objective optimization problems treat multiple objective components like that.
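A minimal pulp sketch of that reformulation (my illustration; the variables y1, y2 and the expressions a, b, c are made-up stand-ins for whatever appears in your model):

import pulp

prob = pulp.LpProblem("min_max", pulp.LpMinimize)

# placeholder decision variables and three expressions we want the max of
y1 = pulp.LpVariable("y1", lowBound=0)
y2 = pulp.LpVariable("y2", lowBound=0)
a = 2 * y1 + y2
b = y1 + 3 * y2
c = 4 * y1

# auxiliary variable z with z >= a, z >= b, z >= c, then minimize z
z = pulp.LpVariable("z")
prob += z
prob += z >= a
prob += z >= b
prob += z >= c

# some extra constraint so the optimum is not trivially zero
prob += y1 + y2 >= 5

prob.solve()
print(pulp.value(z), pulp.value(y1), pulp.value(y2))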

Python equation solver (max and min)

How can I solve an equation like x * max(x, 15) = 10 with Python (maybe Sympy) libraries?
Here max() means the maximum of its two arguments.
My equation has a more complicated form, but I want to solve it in this simplified form.
It looks like SymPy can solve this equation if you convert the Max to a Piecewise.
In [4]: solve(x*Piecewise((x, x >=15), (15, x < 15)) - 10, x)
Out[4]: [2/3]
When I plug your equation into sympy.solve, it gives NotImplementedError, meaning the algorithms to solve it are not implemented (I opened https://github.com/sympy/sympy/issues/10158 for this).
I think to solve equations like these, you would need to replace each Max or Min with each of its arguments in turn, solve every resulting case, and then remove the solutions where that argument was not actually the maximum or minimum.
I'll leave the full algorithm to you or some other answerer (or hopefully someone will implement it in SymPy). Some useful tips:
expr.atoms(Max, Min) will extract all instances of Max and Min from expr.
expr.subs(old, new) will return a new expression with old replaced with new in expr.
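For the simple equation above, a minimal sketch of that case-splitting idea (my code, not a SymPy built-in) could look like this:

from sympy import Max, solve, symbols

x = symbols('x')
expr = x*Max(x, 15) - 10

# replace Max(x, 15) by each of its arguments, solve that case, and keep
# only the solutions for which that argument really was the maximum
solutions = []
for arg, cond in [(x, x >= 15), (15, x < 15)]:
    for sol in solve(expr.subs(Max(x, 15), arg), x):
        if cond.subs(x, sol):
            solutions.append(sol)

print(solutions)  # [2/3]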
There is no answer to your equation. You are assigning x=3, so there is no variable to solve for.
>>> x
3
>>> Max(x, 15)
15
>>> solve(x*Max(x, 15) - 10, x)  # no variable here
[]
Maybe you meant to do this:
y*Max(x, 15) = 10
Then it becomes a valid question.
In [1]: solve(y*Max(x, 15)-10, y)
Out[1]: [2/3]
