I am trying to solve an optimization problem in which one of the constraints is:
x*y = 0, where x and y are decision variables and at most one of x and y can be positive. In other words, if x != 0 then y = 0, and if y != 0 then x = 0.
Please help
Assumption: x and y are nonnegative
Infer upper bounds UB_x and UB_y for x and y.
Introduce a new binary variable b.
Add constraints:
x <= (1-b) * UB_x
y <= b * UB_y
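For illustration, here is a minimal sketch of this reformulation in python-mip (any MILP modeling library works the same way); the bounds of 10 are hypothetical placeholders:

from mip import Model, BINARY

UB_x, UB_y = 10, 10  # hypothetical upper bounds; use the tightest values you can infer
m = Model()
x = m.add_var(lb=0, ub=UB_x)
y = m.add_var(lb=0, ub=UB_y)
b = m.add_var(var_type=BINARY)
m += x <= (1 - b) * UB_x  # b = 0 allows x > 0 but forces y = 0
m += y <= b * UB_y        # b = 1 allows y > 0 but forces x = 0

Either way, at most one of x and y can be nonzero, so x*y = 0 holds.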
I have a program where I want to minimize an absolute difference of two variables (an absolute error function). Say:
e_abs(x, y) = |Ax - By|; where e_abs(x, y) is my objective function that I want to minimize.
The function is subject to the following constraints:
x and y are integers;
x >= 0; y >= 0
x + y = C, where C is an arbitrary constant (also C >= 0)
I am using the mip library (https://www.python-mip.com/), where I have defined both my objective function and my constraints.
The problem is that mip does not have an "abs" method, so I had to work around that by splitting the main problem into two optimization sub-problems:
e(x, y) = Ax - By
Problem 1: minimize e(x, y); subject to e(x, y) >= 0
Problem 2: maximize e(x, y); subject to e(x, y) <= 0
After solving the two separate problems, I compare the two results and take the one with the smaller abs(e).
That should have worked, but mip does not seem to understand that the error can be negative, as I show below:
constr(0): -1.0941176470588232 X(0, 0) +6.199999999999998 X(1, 0) - error = -0.0
constr(1): error <= -0.0
constr(2): X(0, 0) + X(1, 0) = 1.0
Note: consider X(0, 0) as x and X(1, 0) as y in our example
Again, the program returns OptimizationStatus.INFEASIBLE, even though the combination X(0, 0) = 1 and X(1, 0) = 0 clearly solves the problem.
Is it a formulation issue in my model? Or is it bad behavior of the mip library?
You can (and should) reformulate. Because you are minimizing the absolute value of a function, you can introduce a dummy variable and 2 constraints on that variable and then minimize the dummy variable to keep it linear. (ABS is a non-linear function).
So, introduce z such that:
z >= Ax - By
and
z >= -(Ax - By)
then your objective is to minimize z
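A minimal sketch of that reformulation in python-mip, with hypothetical values for A, B and C standing in for your actual data:

from mip import Model, INTEGER, minimize

A, B, C = 6.2, 1.09, 1  # hypothetical coefficients for illustration
m = Model()
x = m.add_var(var_type=INTEGER, lb=0)
y = m.add_var(var_type=INTEGER, lb=0)
z = m.add_var(lb=0)  # dummy variable; equals |A*x - B*y| at the optimum

m += x + y == C
m += z >= A*x - B*y      # z bounds the error from above ...
m += z >= -(A*x - B*y)   # ... and from below, so z >= |A*x - B*y|
m.objective = minimize(z)
m.optimize()

Because the solver pushes z down as far as the two constraints allow, z settles at exactly |Ax - By|, and no split into two sub-problems is needed.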
Suppose I have two real variables: X & Y and two binary variables x & y.
I want to add the following constraints in pyomo:
when X > 0, x -> 1; else x -> 0
when Y > 0, y -> 1; else y -> 0
and x+y==1
My approach was
cons1:
x>=X
cons2:
y>=Y
cons3:
x+y==1
but the above doesn't seem to work and the values of x and y are random.
Your first two conditions require big M constraints. You can try something like
M_x * x >= X, M_y * y >= Y, and x + y == 1, where M_x and M_y are constants that you set to values that don't unnecessarily bound X and Y. These constraints won't restrict the values of X and Y to 1, and they will make x = 1 when X > 0 and y = 1 when Y > 0.
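A minimal Pyomo sketch of those three constraints; the big-M values of 100 are placeholders for whatever upper bounds actually hold for X and Y:

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.X = pyo.Var(within=pyo.NonNegativeReals)
m.Y = pyo.Var(within=pyo.NonNegativeReals)
m.x = pyo.Var(within=pyo.Binary)
m.y = pyo.Var(within=pyo.Binary)

M_x, M_y = 100, 100  # placeholders; must be valid upper bounds on X and Y
m.cons1 = pyo.Constraint(expr=M_x * m.x >= m.X)  # X > 0 forces x = 1
m.cons2 = pyo.Constraint(expr=M_y * m.y >= m.Y)  # Y > 0 forces y = 1
m.cons3 = pyo.Constraint(expr=m.x + m.y == 1)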
I have a function [ -4*x/sqrt(1 - (1 - 2*x^2)^2) + 2/sqrt(1 - x^2) ] that I need to evaluate at x = 0. However, when you graph this function, it appears to take a whole interval of y-values at x = 0. This leads me to think that the subs command can only return one y-value. Any help or elaboration on this? Thank you!
Here's my code if it might help:
x = symbols('x')
f = 2*asin(x) # f(x) function
g = acos(1-2*x**2) # g(x) function
eq = diff(f-g) # evaluating the derivative of f(x) - g(x)
eq.subs(x, 0) # substituting 0 for x in the derivative of f(x) - g(x)
After I run the code, it returns NaN, which I assume is because substituting in 0 for x returns not a single number, but a range of numbers.
Here is the graph of the function to be evaluated at x=0:
You should always give SymPy as many assumptions as possible. For example, it can't pull an x**2 out of a sqrt because it thinks x is complex.
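A quick demonstration of how much the assumptions matter (a standalone sketch, independent of the question's function):

from sympy import symbols, sqrt

xc = symbols('xc')                 # complex by default
xp = symbols('xp', positive=True)  # declared positive
print(sqrt(xc**2))  # stays sqrt(xc**2): SymPy cannot pull xc out of the root
print(sqrt(xp**2))  # prints xp: the assumption allows the simplification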
A factorization and then a simplification solve the problem. SymPy can't do L'Hôpital on eq = A + B since it does not know that both A and B converge. So you have to guide it a little by bringing the fractions together and then simplifying:
from sympy import *
x = symbols('x', real=True)
f = 2*asin(x) # f(x) function
g = acos(1-2*x**2) # g(x) function
eq = diff(f-g) # evaluating the derivative of f(x) - g(x)
eq = simplify(factor(eq))
print(eq)
print(limit(eq, x, 0, "+"))
print(limit(eq, x, 0, "-"))
Outputs:
(-2*x + 2*Abs(x))/(sqrt(1 - x**2)*Abs(x))
0
4
simplify, factor and expand do wonders.
I need to minimize L_1(x) subject to Mx = y.
x is a vector with dimension b, y is a vector with dimension a, and M is a matrix with dimensions (a,b).
After some reading I determined to use scipy.optimize.minimize:
import numpy as np
from scipy.optimize import minimize

def objective(x):  # L_1 norm objective function
    return np.linalg.norm(x, ord=1)

constraints = []  # list of all constraint functions
for i in range(a):
    def con(x, y=y, i=i):
        return np.matmul(M[i], x) - y[i]
    constraints.append(con)

# make constraints into a tuple of dicts as required by scipy
cons = tuple({'type': 'eq', 'fun': c} for c in constraints)

# perform the minimization with sequential least squares programming
opt = minimize(objective, x0=x0,
               constraints=cons, method='SLSQP', options={'disp': True})
First,
what can I use for x0? x is unknown, and I need an x0 which satisfies the constraint M*x0 = y: How can I find an initial guess which satisfies the constraint? M is a matrix of independent Gaussian variables (~N(0,1)) if that helps.
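For reference, one way to build such a feasible starting point is the minimum-norm least-squares solution, assuming M has full row rank (almost sure for Gaussian entries when a < b):

import numpy as np

# minimum-norm solution of M @ x0 = y via the pseudoinverse
x0 = np.linalg.pinv(M) @ y
assert np.allclose(M @ x0, y)  # feasibility check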
Second,
Is there a problem with the way I've set this up? When I use the true x (which I happen to know in the development phase) for x0, I expect it to return x = x0 quickly. Instead, it returns a zero vector x = [0,0,0...,0]. This behavior is unexpected.
Edit:
Here is a solution using cvxpy, solving min(L_1(x)) subject to Mx = y:
import cvxpy as cvx
import numpy as np
x = cvx.Variable(b) #b is dim x
objective = cvx.Minimize(cvx.norm(x,1)) #L_1 norm objective function
constraints = [M*x == y] #y is dim a and M is dim a by b
prob = cvx.Problem(objective,constraints)
result = prob.solve(verbose=False)
#then clean up and chop the 1e-12 vals out of the solution
x = np.array(x.value) #extract array from variable
x = np.array([a for b in x for a in b]) #unpack the extra brackets
x[np.abs(x)<1e-9]=0 #chop small numbers to 0
I'm using CVXPY in Python 3 to try to model the following linear program in X (an N by T matrix). Let
R be an N by 1 matrix where each entry R_i is the sum of row i of X.
P be a T by 1 matrix defined in terms of X such that P_t = 1/(G - d - x_t).
I want to solve for an ideal X such that:
minimize (X x P)
subject to:
The sum of each row i in X has to be at least the value in R_i
Each value in X has to be at least 0
Any thoughts? I have the following code, but I'm not having any luck:
from cvxpy import *
import numpy as np

X = Variable(N, T)
P = np.random.randn(T, 1)
# using cumsum per http://www.cvxpy.org/en/latest/tutorial/functions/index.html#vector-matrix-functions
R = cumsum(X, axis=0)
objective = Minimize(sum_entries(square(X*P)))  # think this is good
constraints = [0 <= X, cumsum(X, axis=0) >= R]
prob = Problem(objective, constraints)
result = prob.solve()