I am pretty new to the subject of linear programming and would appreciate any pointers.
I have a slightly complicated equation but here is a simpler version of the problem:
x1 + x2 = 10
#subject to the following constraints:
0 <= x1 <= 5 and
3x1 <= x2 <= 20
Basically x2 has to be at least 3 times x1. So in this case the integer solutions are x1 = [0, 1, 2] and correspondingly x2 = [10, 9, 8].
There is a lot of material out there on minimizing or maximizing an objective function, but this is not one of those problems. What do you call solving this type of problem, and what is the recommended way to solve it, preferably using some Python library, to find one or multiple feasible solutions?
Your problem could be stated as
min 0*x1+0*x2 ("zero coefficients")
subject to
x1+x2=10
3x1-x2<=0
x2<=20 (note that this constraint is redundant: x1>=0 and x1+x2=10 already imply x2<=10<=20)
This can easily be fed into a linear programming package such as PuLP. I am more of an R user than a Python user, hence I cannot provide details. You could also solve it online without any programming.
EDIT: rereading your question, I see that your desired solutions are not continuous (e.g. it seems you are not looking for [2.5, 7.5] as a solution), but are restricted to integer values. The problem would then be called a "mixed integer problem" instead of a "linear problem". PuLP, however, should be able to solve it if you declare the variables x1, x2 as integers.
Another point is whether you are after ALL integer solutions given the constraints. There have been some discussions about that here on Stack Overflow; however, I am unsure whether PuLP can do that out of the box.
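For concreteness, here is a minimal PuLP sketch of the integer version (assuming PuLP and its bundled CBC solver are installed; the model and variable names are my own, and the zero objective just asks for any feasible point):

import pulp

prob = pulp.LpProblem("feasibility_example", pulp.LpMinimize)
x1 = pulp.LpVariable("x1", lowBound=0, upBound=5, cat="Integer")
x2 = pulp.LpVariable("x2", lowBound=0, upBound=20, cat="Integer")

prob += 0 * x1 + 0 * x2      # dummy objective ("zero coefficients")
prob += x1 + x2 == 10        # equality constraint
prob += 3 * x1 - x2 <= 0     # i.e. x2 >= 3*x1

prob.solve()
print(pulp.LpStatus[prob.status], x1.value(), x2.value())

To enumerate ALL integer solutions, one common approach is to re-solve in a loop, each time adding a cut that excludes the solution just found (straightforward for binary variables; general integer variables need a bit more modeling).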
I want to implement Boolean logic and dependent variables into a Mixed-Integer Linear Program with scipy.optimize.milp, which uses the HiGHS solver.
How do I set up the actual matrices and vectors c, A_ub, b_ub, A_eq, b_eq to encode the following example Boolean operations in the MILP:
Boolean variables: a, b, c, d, e, f, g, h, i, j, k, l
Minimize 1a+1b+...+1l
such that:
a OR b
c AND d
e XOR f
g NAND h
i != j
k == l
a,b,...,l are set to integers via the integrality parameter:
integrality=np.repeat(3, 12+amount_of_helper_variables)
And the lower and upper bounds are set to match boolean values 1 or 0 only:
Forall x in {a,b,...,l}: 0 <= x <= 1
I figured this CS post might help a lot as a general building guide, especially for solvers taking arbitrary formula input formats, but so far I haven't gotten far with the conversion to standard matrix form myself.
I'm asking for a generalizable conversion approach that can basically be used as a helper method for array creation, and that doesn't just apply to the stated problem but to all Boolean formula conversions to standard matrix-form MILP, using np.arrays to juggle the variables and helpers around.
Disclaimer
Generalization is fine, but sometimes we lose exploitable substructure in mathematical optimization. Sometimes this is bad!
Recommendation
That being said, I recommend the following.
Intermediate language: Conjunctive normal form
It's well known that we can express any Boolean function in it
It's the form a SAT solver would expect: DIMACS CNF -> some empirical proof that it's a good pick
There is lots of well-understood tooling
There is a natural MILP-formulation
Transformation: CNF -> MILP
Helper-function
Input: CNF defined on boolean variables (integral and bounded by [0, 1])
Output:
Set of constraints aka rows in constraint matrix A_ub
Set of constants aka scalars in b_ub
No matter what kind of input you have:
You might go through one joint CNF or decompose into many CNFs. By definition you can concatenate them (their conjunction), meaning: A_ub and b_ub are obtained by stacking those outputs.
The transformation is simple:
for each CNF c:
    for each clause (disjunction) in c:
        add the constraint:
            sum of positive literals - sum of negative literals >= 1 - |negative literals|
Wiki: Literal:
A positive literal is just an atom (e.g. x).
A negative literal is the negation of an atom (e.g. not x).
Example for a given clause = disjunction in some cnf:
x1 or x2 or !x3
->
x1 + x2 + (1-x3) >= 1 easier to understand
<->
x1 + x2 - x3 >= 1 - 1 as proposed above
<->
x1 + x2 - x3 >= 0
(I left one step open: we need to multiply our constraints by -1 to follow scipy's A_ub @ x <= b_ub standard form; but well... you get the idea)
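Here is a rough sketch of that helper function (my own illustration, not part of the original transformation text), assuming clauses are given DIMACS-style as lists of signed 1-based indices, e.g. [1, 2, -3] for x1 or x2 or !x3. It already performs the multiplication by -1, so the rows fit scipy's A_ub @ x <= b_ub form:

import numpy as np

def cnf_to_milp_rows(clauses, n_vars):
    A_ub, b_ub = [], []
    for clause in clauses:
        row = np.zeros(n_vars)
        n_negative = 0
        for lit in clause:
            if lit > 0:
                row[lit - 1] -= 1.0    # positive literal: -x_i on the <= side
            else:
                row[-lit - 1] += 1.0   # negative literal: +x_i on the <= side
                n_negative += 1
        A_ub.append(row)
        b_ub.append(n_negative - 1)    # ">= 1 - |negatives|" multiplied by -1
    return np.array(A_ub), np.array(b_ub)

A, b = cnf_to_milp_rows([[1, 2, -3]], n_vars=3)
print(A, b)    # [[-1. -1.  1.]] [0]

Stacking the outputs of several such calls gives you the combined A_ub and b_ub mentioned above.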
Tooling
CNF
SymPy has a boolean algebra module which could help (e.g. transform to cnf); a small sketch follows after this list
pyeda can achieve similar things (and is actually more targeting use-cases like that)
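As a small sketch of the SymPy route (to_cnf is documented in sympy.logic.boolalg):

from sympy import symbols
from sympy.logic.boolalg import to_cnf, Xor

e, f = symbols("e f")
print(to_cnf(Xor(e, f)))    # (e | f) & (~e | ~f)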
Remarks
There are tons of other potentially relevant things, especially around CNF creation.
These things are often important in the real world, e.g. the Tseitin transformation (for cases where a naive CNF creation would result in exponential size). pyeda also knows about Tseitin if I remember correctly.
But well... it's just a Stack-Overflow answer ;-)
References
If you need some reading material, I recommend:
Hooker, John N. Integrated methods for optimization. Vol. 170. New York: Springer, 2012.
I would approach this in two steps:
Write things down equation based
Convert (painfully) into matrix format
So we have:
x OR y. I.e. x=1 OR y=1. That is x+y>=1.
x AND y. I.e. x=1 AND y=1. That means just fixing both variables to 1.
x XOR y. I.e. x=1 XOR y=1. That is x+y=1.
x NAND y. I.e. not (x=1 AND y=1). So x+y<=1.
x <> y. This is just different notation for x XOR y, which we handled already.
x = y. This equation is ready as-is. Maybe write it as x - y = 0.
Step 2 can usually be done in block format using a (large) piece of paper. Each column is a variable (or block of variables) and each row is a constraint. Here all matrix entries (coefficients) are 0, -1 or 1. E.g. x-y=0 means: create a row with a coefficient of 1 in the x column and a -1 in the y column. See: How to implement Linear Programming problem in scipy with complex objective for an example. It is often better to automate this and let a program do it for you. Python tools that do this for you are e.g. PuLP and Pyomo.
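To make step 2 concrete for the six example operations, here is a hedged sketch using scipy.optimize.milp with one stacked LinearConstraint (equalities are written as lb == ub; the variable order a..l and the matrix layout are my own choices, and I use integrality=1, plain integer, together with [0, 1] bounds rather than the semi-integer setting 3 from the question):

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# columns:  a  b  c  d  e  f  g  h  i  j  k   l
A = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,  0],   # a OR  b :  a + b >= 1
    [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,  0],   # c AND d :  c = 1
    [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,  0],   #            d = 1
    [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0,  0],   # e XOR f :  e + f = 1
    [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0,  0],   # g NAND h:  g + h <= 1
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0,  0],   # i != j  :  i + j = 1
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1],   # k == l  :  k - l = 0
])
lb = np.array([1, 1, 1, 1, -np.inf, 1, 0])
ub = np.array([np.inf, 1, 1, 1, 1, 1, 0])

res = milp(c=np.ones(12),                             # minimize 1a + 1b + ... + 1l
           constraints=LinearConstraint(A, lb, ub),
           integrality=np.ones(12),                   # all variables integer
           bounds=Bounds(np.zeros(12), np.ones(12)))  # 0 <= x <= 1
print(res.x)

The same rows can of course be split into A_ub/b_ub and A_eq/b_eq if you prefer linprog-style matrices.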
This may seem like a bit of a funny question, but is there a way to program an LP problem with two 'lower' bounds?
Basically my problem is: rather than having conventional bounds (0, x) for some variable 'a', I want to have bounds ((0 or i), x) where i and x are floats. So if zeroing it out doesn't optimize it, the solver finds the optimal value between i and x; e.g. (0, 5, 100), where the optimal value can either be zero or a float somewhere between 5 and 100.
Is there a way of programming this in scipy linprog or PuLP? Or is there a more sophisticated solver that can handle such constraints?
The exact scenario you describe is not possible using only LP (so you wouldn't be able to solve this with linprog), but you can do something like this with MILP. You would introduce a binary variable, say b, which is 0 if both the lower and the upper bound are 0, and 1 if you have the other bounds. Then you would add the constraints b*i <= a and a <= b*x. This way, when b is 0, a must be 0, and when b is 1, you recover your bounds i <= a <= x. You would be able to solve this with PuLP.
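What you describe is essentially a semi-continuous variable. Here is a minimal PuLP sketch of the indicator trick above, using your example range (0, 5, 100) (the objective and remaining constraints are omitted, and the variable names are my own):

import pulp

prob = pulp.LpProblem("semi_continuous_example", pulp.LpMinimize)
a = pulp.LpVariable("a", lowBound=0, upBound=100)
b = pulp.LpVariable("b", cat="Binary")

prob += 5 * b <= a       # if b = 1, force a >= 5
prob += a <= 100 * b     # if b = 0, force a <= 0, i.e. a = 0

# ... add your actual objective and remaining constraints here, then call prob.solve()

Some commercial solvers also support semi-continuous variables natively, but the binary-variable formulation works with any MILP solver.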
With no luck I have been trying to solve a problem within a personal project for a few weeks now. Recently I have received help from the math stack exchange in the form of the answer to this question: https://math.stackexchange.com/a/4089367/907708 and I am now trying to translate the equations in the answer into python code and I keep getting stuck.
I will restate the question here for ease of access (I tried to post the answer here as well, but the formatting came out all wacky, so you'll have to refer to the link above to find it. oops)
Question:
Situation: I am trying to minimize the standard deviation of a series of points of differing heights in a list, with the constraint that each point in the list can be raised by anywhere from 0 to 2 units.
Example: I have a list of points which are equidistant on the x axis but not the y axis.
h = [20, 24, 28, 24 ,20 ,18, 20, 32 ,30, 28, 20 ,24]
Where each number in the list represents that point's height.
I also have the constraint that each point in the list can be raised by a c value anywhere from 0 to 2 in order to help achieve a smaller standard deviation.
I am trying to create an algorithm that minimizes the standard deviation of the points of h, with the constraint that each point in h can be raised by some c with 0 <= c <= n, for an h of any length with any values and for any n > 0.
I am very new to optimization problems and although I have seen problems that look similar to my question, I have not seen any that I've been able to gather enough information to help push me further towards an answer.
If possible, I was hoping someone would help me define the objective function, constraints, and other necessary functions that would lead me to an answer.
This is not a homework problem so therefore I have no course material to help guide me to an answer. The only guidance I have is from the comments and answers to this post. Please understand that I am in no way a mathematician so I really need all the help I can get. Thanks!
End Question:
I have looked into countless scipy.optimize examples to try to figure out how to format my code to make this work, but I have been unsuccessful so far. I was hoping someone with more knowledge on the subject could help me translate these formulas or tell me what libraries or material I should look into in order to help answer this question. (I would post the collection of code snippets I have tried so far, but none of them got me anywhere of value and I don't think they would provide any valuable insights, so I purposefully omitted them.)
Any feedback is greatly appreciated and I will be sure to respond quickly to any questions or comments you have. Thank you so much!
You can install PyGad. Genetic algorithms work well for this kind of optimization problem. Also, it's much easier to implement, imho. Just do pip install pygad. Below is the code to solve your problem. I'm using the default config of PyGad. The fitness function evaluates how good any candidate solution is. Since PyGad tries to maximize (and we want to minimize), we return 1/evaluation in the fitness function. We can set constraints by just returning -100 (bad fitness) for any invalid candidate solution. I plugged in the formula given in the math.stackexchange answer and pre-compute X_BAR, even though the whole thing should take less than 5 seconds anyway. The problem-specific configuration is: the fitness_func, num_genes, and an initial range for the ci values (init_range_low/high).
import pygad
import numpy as np

X = [20, 24, 28, 24, 20, 18, 20, 32, 30, 28, 20, 24]
X_BAR = np.array(X).sum() / len(X)

def fitness_function(solution, solution_idx):
    c_bar = np.array(solution).sum() / len(solution)
    accum = 0
    for i, ci in enumerate(solution):
        if ci < 0 or ci > 2:
            return -100            # penalize candidates violating 0 <= ci <= 2
        accum += (X[i] + ci - (X_BAR + c_bar)) ** 2
    fitness = 1 / accum            # PyGad maximizes, so invert the objective
    return fitness

ga_instance = pygad.GA(num_generations=100,
                       num_parents_mating=7,
                       fitness_func=fitness_function,
                       sol_per_pop=50,
                       num_genes=len(X),
                       init_range_low=0,
                       init_range_high=2,
                       parent_selection_type="sss",
                       keep_parents=7,
                       crossover_type="single_point",
                       mutation_type="random",
                       mutation_percent_genes=10)
ga_instance.run()
ga_instance.plot_result()
sol, sol_fitness, sol_idx = ga_instance.best_solution()
print("Parameters of the best solution : {solution}".format(solution=sol))
Here is how the fitness evolves (fitness plot from ga_instance.plot_result() omitted):
Finally, the solution given by the algorithm is:
[1.99574728e+00 1.00786156e+00 2.17545152e-02 1.16404525e+00
1.98465204e+00 1.98997128e+00 1.98167328e+00 1.32911147e-02
1.17406735e-03 1.30281600e-04 1.99916130e+00 1.17383310e+00]
Another solution using scipy.optimize.minimize:
import numpy as np
from scipy.optimize import minimize

x = np.array([20, 24, 28, 24, 20, 18, 20, 32, 30, 28, 20, 24])

# define the objective to minimize
# (here c is the variable and x is an additional argument)
def obj(c, x): return np.sum((x + c - (np.mean(x) + np.mean(c)))**2)

# Variable bounds: 0 <= ci <= 2
bounds = [(0, 2) for _ in range(len(x))]

# initial guess for the solver
c0 = np.ones_like(x)

# call the solver and pass a function that only depends on the variable c
res = minimize(lambda c: obj(c, x), x0=c0, bounds=bounds)

# your solution
print(res.x)
gives
array([2. , 1.11111256, 0. , 1.11111256, 2. ,
2. , 2. , 0. , 0. , 0. ,
2. , 1.11111446])
I'm trying to write a program that will allow me to solve a system of equations using numpy, however, I want the solution to be non-trivial (not all zeros). Obviously the program is just going to set everything to 0, and boom, problem solved. I attempted to use a while loop (like below), but quickly found out it's going to continue to spit 0 back at me. I don't care if I end up using numpy, I'm open to other solutions if it's more elegant.
I actually haven't solved this particular set by hand, maybe the trivial solution is the only solution. If so, the principle still applies. Numpy seems to always spit 0 back.
Any help would be appreciated! Thanks.
import numpy as np

x1 = .5
x2 = .3
x3 = .2
x4 = .05
a = np.array([[x1, x2], [x3, x4]])
b = np.array([0, 0])
ans = np.linalg.solve(a, b)
while ans[0] == 0 and ans[1] == 0:
    print("got here")
    ans = np.linalg.solve(a, b)
print(ans)
In your case, the matrix a is invertible. Therefore your system of linear equations has only one solution and the solution is [0, 0]. Are you wondering why you only get that unique solution?
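A quick way to check this yourself (a small sketch, not part of the original answer):

import numpy as np

a = np.array([[0.5, 0.3], [0.2, 0.05]])
print(np.linalg.det(a))   # about -0.035, i.e. nonzero, so a is invertible and a @ x = 0 only has x = [0, 0]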
Check out SymPy and its use of solve and matrix calculations. Here are the pages for both.
http://docs.sympy.org/latest/tutorial/matrices.html
http://docs.sympy.org/latest/tutorial/solvers.html
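For the homogeneous case specifically, the nontrivial solutions of a x = 0 (if any exist) are exactly the nullspace of a; here is a minimal SymPy sketch using your numbers:

import sympy as sp

a = sp.Matrix([[sp.Rational(1, 2), sp.Rational(3, 10)],
               [sp.Rational(1, 5), sp.Rational(1, 20)]])
print(a.det())        # -7/200, nonzero
print(a.nullspace())  # [] -> empty, so only the trivial solution exists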
Using excel solver, it is easy to find a solution (optimum value for x and y )for this equation:
(x*14.80461) + (y * -4.9233) + (10*0.4803) ≈ 0
However, I can't figure out how to do this in Python. The existing scipy.optimize library functions like fsolve() or leastsq() seem to work with only one variable... (I might just not know how to use them)...
Any suggestions?
Thanks!
>>> from scipy.optimize import fsolve
>>> def f(x):
...     return x[0]*14.80461 + x[1]*(-4.9233) + x[2]*(10*0.4803)
...
>>> def vf(x):
...     return [f(x), 0, 0]
...
>>> xx = fsolve(vf, x0=[0,0,1])
>>>
>>> f(xx)
8.8817841970012523e-16
Since the solution is not unique, different initial values for an unknown lead to different (valid) solutions.
EDIT: Why this works. Well, it's a dirty hack. It's just that fsolve and its relatives deal with systems of equations. What I did here is define a system of three equations (vf(x) returns a three-element list) for three variables (x has three elements). Now fsolve uses a Newton-type algorithm to converge to a solution.
Clearly, the system is underdetermined: you can specify arbitrary values of two variables, say x[1] and x[2], and find x[0] to satisfy the only non-trivial equation you have. You can see this explicitly by specifying a couple of different initial guesses x0 and seeing different outputs, all of which satisfy f(x)=0 up to a certain tolerance.
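To illustrate (a small sketch reusing the code above): different starting points give different roots, all of which satisfy f(x) = 0 up to the solver's tolerance.

from scipy.optimize import fsolve

def f(x):
    return x[0]*14.80461 + x[1]*(-4.9233) + x[2]*(10*0.4803)

def vf(x):
    return [f(x), 0, 0]

for x0 in ([0, 0, 1], [1, 1, 1], [5, -2, 0]):
    xx = fsolve(vf, x0=x0)
    print(xx, f(xx))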