We have a complex optimization problem that includes several quadratic terms with integer and continuous variables (using Anaconda Python / PyCharm with Gurobi 6.0.2). We applied the setPWLObj function to approximate the quadratic objective components. The code for this is as follows:
from numpy import linspace

# Split the quadratic expression into two auxiliary continuous variables
m.addConstr(l1[t] == 1/2.0 * (hsqrt[t] + hQ[t]))
m.addConstr(l2[t] == 1/2.0 * (hsqrt[t] - hQ[t]))

# Breakpoints for the piecewise-linear approximation
hlx1 = linspace(-10, 10, 50)
hlx2 = linspace(-10, 10, 50)
h1y1 = [0] * 50
hly2 = [0] * 50
for i in range(len(hlx1)):
    h1y1[i] = hlx1[i] * hlx1[i] * 7.348 / 1000.0
    hly2[i] = -hlx2[i] * hlx2[i] * 7.348 / 1000.0

# Attach the piecewise-linear objective terms to l1 and l2
m.setPWLObj(l1[t], hlx1, h1y1)
m.setPWLObj(l2[t], hlx2, hly2)
Here l1 and l2 are continuous variables.
The problem behaves inconsistently. Running it on a Mac mostly delivers the exit codes 138 and 139 (corresponding to Bus Error 10), but sometimes a solution can be calculated for the same problem. This happens in particular when starting the optimization several times in a row; it appears to be random.
On Windows machines either Python crashes or the exit code "-1073741819" is returned. Further searches for this exit code did not really help us.
Sorry for taking so long, but we fixed the issue.
We found out that the Python crash was due to a bug in Gurobi. Following a request we filed with them, the bug was fixed.
If Gurobi 6.0.3 or above is used, the error no longer occurs.
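To check which Gurobi build the Python environment actually loads, the version constants in gurobipy can be printed (a minimal sketch, not part of the original post):
import gurobipy as gp

# the crash described above should be gone for any build of 6.0.3 or later
print(gp.GRB.VERSION_MAJOR, gp.GRB.VERSION_MINOR, gp.GRB.VERSION_TECHNICAL)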
I have developed a nurse scheduling program in Python for one of the departments of the hospital I work with. The program makes use of OR-Tools and is based on the following example: https://github.com/google/or-tools/blob/master/examples/python/shift_scheduling_sat.py
In order to constrain the number of shifts employees can work in a week/month, I use constraints of the following form:
model.Add(min_hour <= sum(work[k, s, d] for s in range(1, 4) for d in range(i, j)) <= max_hour)
Here (i, j) denotes the beginning/end of a week or month.
The program worked fine for a couple of months, until about 2 weeks ago. Then I started getting errors on constraints of this type. Specifically, I'm getting the following message:
NotImplementedError: Evaluating a BoundedLinearExpr as a Boolean value is not supported.
Because of run-time issues I usually run the code on a Google Cloud VM, and this is where I run into trouble. However, when I run the code on my local machine, which likely has a different version of OR-Tools, I don't get any errors at all.
I haven't been able to find anything in the documentation about this problem, so I'm wondering how to address it. Is it something that needs to be fixed in the package, or do I need to rewrite my code? If so, what changes would I need to make? The example code seems to have remained unchanged.
The Python wrapper was updated to catch more user errors.
In ortools==8.2.8710 this prints OPTIMAL:
from ortools.sat.python import cp_model
model = cp_model.CpModel()
a = model.NewIntVar(0, 1, "")
model.Add(2 <= a <= 3) # doesn't do anything
solver = cp_model.CpSolver()
solver.Solve(model)
print(solver.StatusName())
while in newer versions it raises an error.
You have to split your constraint into two model.Add calls (or remove the constraint to get the same wrong behaviour).
Edit: in your case
hours = sum(work[k, s, d] for s in range(1, 4) for d in range(i, j))
model.Add(hours >= min_hour)
model.Add(hours <= max_hour)
# or following Laurent's advice
model.AddLinearExpressionInDomain(hours, cp_model.Domain(min_hour, max_hour))
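For reference, a minimal self-contained sketch of both forms (toy sizes, a single employee, and hypothetical bounds, so the names here are only illustrative):
from ortools.sat.python import cp_model

model = cp_model.CpModel()
# one Boolean per (shift, day): does the employee work that shift on that day?
work = {(s, d): model.NewBoolVar(f"work_{s}_{d}")
        for s in range(1, 4) for d in range(7)}
min_hour, max_hour = 2, 5
hours = sum(work[s, d] for s in range(1, 4) for d in range(7))

model.Add(hours >= min_hour)
model.Add(hours <= max_hour)
# or, equivalently, as a single domain constraint:
# model.AddLinearExpressionInDomain(hours, cp_model.Domain(min_hour, max_hour))

solver = cp_model.CpSolver()
print(solver.StatusName(solver.Solve(model)))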
I was able to run the code successfully, without encountering the BoundedLinearExpr error, after rewriting it and implementing the constraints the way @Stradivari suggested:
hours = sum(work[k, s, d] for s in range(1, 4) for d in range(i, j))
model.Add(hours >= min_hour)
model.Add(hours <= max_hour)
You can also get this error when using
model.add(yourBoolVar)
instead of
model.add(yourBoolVar == 1)
I have the following code in a function:
import numpy as np
from numpy.linalg import pinv   # pinv assumed to come from numpy.linalg

phi = fourier_matrix(y, fs)     # fourier_matrix, y and fs are defined elsewhere
N = np.size(phi, axis=1)
x = np.ones(N)
for i in range(136):
    W_pk = np.diag(x)
    temp = pinv(np.dot(phi, W_pk))
    q_k = np.dot(temp, y)
    x = np.dot(W_pk, q_k)       # this last dot product triggers the crash
where phi is (96,750), W_pk is (750,750) and q_k is (750,).
This throws the following error:
Python(12001,0x7fff99d8b380) malloc: *** error for object 0x7fc71aa37200: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
If I comment out the last dot product, the error does not appear.
I think I need to free memory in some way or maybe do the dot product in a different way?
Also, this only happens when I run it on a Mac. On Windows or Linux it does not throw the error.
Python is 3.6 (also tried 3.7), and numpy is 1.14.5 (also tried 1.15).
Any help would be greatly appreciated, since I really need to make this work!
Thanks in advance.
EDIT I:
I tried this portion of the code in a Jupyter notebook, and it didn't fail. This confused me even more! It fails when I run it in Visual Studio Code on a Mac. The rest of my code, an algorithm to remove artifacts from a signal, works as it should until I add that last line, x = np.dot(W_pk, q_k). Maybe it works on Jupyter because I don't run the rest of the algorithm there? But as I said, it only crashes on that last dot product.
EDIT II: I added the code above the for loop to this question, because I found that the problem is somehow related to how x is being used. It's declared above as a float64 ndarray. When it reaches the last line of the for loop, the dot product returns a complex128 array (it should be complex64, I don't know what's happening there) and overwrites the x array. The first iteration works, but the second time it crashes when trying to overwrite. If I use a new variable for the dot product, say z, then it does not crash! Not sure why... but I need to overwrite x in each iteration.
Furthermore, if I do something like this:
z = np.dot(W_pk, q_k)
x = abs(z) #I don't need complex numbers at this point
Then it crashes with the same error on the first dot product (presumably):
temp = pinv(np.dot(phi, W_pk))
Also, the memory consumption is not that bad, around 110 MB according to some measurements, and the same algorithm does not crash on IPython even with twice the memory usage. This is what I find most puzzling: why doesn't it crash on IPython?
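One untested idea, based only on the dtype observation in EDIT II (a hypothetical sketch rather than a confirmed fix): allocate x as complex from the start, so the assignment at the end of the loop never has to change the array's dtype.
x = np.ones(N, dtype=np.complex128)  # dtype stays complex128 across all iterations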
I am working on a very stiff, Michaelis-Menten-type system (already implemented and published in Matlab, where it is solved easily with ode15s). I translated it to Python, but no solver can move beyond step 2 of the integration.
I have tried:
import numpy as np

# time grid
t_start = -10
t_end = 12
t_step = 0.01
# Number of time steps: 1 extra for the initial condition
num_steps = np.floor((t_end - t_start) / t_step) + 1
[...]
from scipy import integrate

# integration
r = integrate.ode(JahanModel_ODE).set_integrator('lsoda', atol=1e-8, rtol=1e-6)
and also for the integrator:
r = integrate.ode(JahanModel_ODE).set_integrator('vode', method='bdf', order=5)
with different tolerances.
All I get is
UserWarning: lsoda: Excess work done on this call (perhaps wrong Dfun
type). 'Unexpected istate=%s' % istate))
or
UserWarning: vode: Excess work done on this call. (Perhaps wrong MF.)
'Unexpected istate=%s' % istate))
I also tried different values for t_step.
There already seemed to be a satisfying answer here: Integrate stiff ODEs with Python, but the links there are not working anymore, and googling suggested that lsoda is already superior to LSODE.
EDIT: Here is the complete code, without the plotting parts. Gistlink
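For comparison, here is a minimal sketch (not from the original post) of the same setup driven through SciPy's newer solve_ivp interface, whose 'BDF' and 'Radau' methods are the usual SciPy analogues of Matlab's ode15s; JahanModel_ODE(t, y) and the initial state y0 are assumed to be defined as in the gist:
import numpy as np
from scipy import integrate

# evaluate on the same grid as above
t_eval = np.linspace(t_start, t_end, int(num_steps))
sol = integrate.solve_ivp(JahanModel_ODE, (t_start, t_end), y0,
                          method='BDF', atol=1e-8, rtol=1e-6, t_eval=t_eval)
print(sol.status, sol.message)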
I'm getting this error when I try to compute the logistic function for a data mining method I'm implementing:
RuntimeWarning: overflow encountered in exp
My code:
import numpy as np

def logistic_function(x):
    # x = np.float64(x)
    return 1.0 / (1.0 + np.exp(-x))
If I understood correctly from some related questions, the problem is that np.exp() is returning a huge value. I saw suggestions to let NumPy ignore the warnings, but the problem is that when I get this warning, the results of my method are horrible, whereas when I don't get it they are as expected. So making NumPy ignore the warning is not a solution for me at all. I don't know what is wrong or how to deal with it.
I don't even know if this is the result of a bug, because sometimes I get this warning and sometimes not! I went through my code many times and everything looks correct!
You should compute the logistic function either with scipy.special.expit, which in recent enough SciPy versions is more stable than your solution (although earlier versions got it wrong), or by reducing it to tanh:
import numpy as np

def logistic_function(x):
    # numerically stable: tanh saturates instead of overflowing
    return .5 * (1 + np.tanh(.5 * x))
This version of the function is stable, fast, and fairly accurate.
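As a quick sanity check (a sketch, not part of the original answer), both the expit and the tanh form stay finite for inputs where the naive formula overflows:
import numpy as np
from scipy.special import expit

x = np.array([-1000.0, -20.0, 0.0, 20.0, 1000.0])
print(expit(x))                    # no overflow warning
print(.5 * (1 + np.tanh(.5 * x)))  # matches expit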
I recently reinstalled my Python environment, and code that used to run very quickly now creeps along at best (and usually just hangs, taking up more and more memory).
The point at which the code hangs is:
solve(exp(-alpha * x**2) - 0.01, alpha)
I've been able to reproduce this problem with a fresh IPython 0.13.1 session:
In [1]: from sympy import solve, Symbol, exp
In [2]: x = 14.7296138519
In [3]: alpha = Symbol('alpha', real=True)
In [4]: solve(exp(-alpha * x**2) - 0.01, alpha)
This works for integer values of x, but it is also quite slow. In the original code I looped over this, solving for hundreds of different alphas for different values of x (other than 14.7296138519), and it didn't take more than a second.
Any thoughts?
The rational=False flag was introduced for such cases as this.
>>> q=14.7296138519
>>> solve(exp(-alpha * q**2) - 0.01, alpha, rational=False)
[0.0212257459123917]
(The explanation is given in the issue cited above.)
Rolling back from version 0.7.2 to 0.7.1 solved this problem.
easy_install sympy==0.7.1
I've reported this as a bug to sympy's google code.
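(For environments without easy_install, the pip equivalent of that pin would be:)
pip install sympy==0.7.1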
I am currently running SymPy version 1.11.1.
This is, as far as I know, the latest version.
However, the hanging described above persists when solving a set of 3 differential equations for 3 angular second derivatives.
E.g.:
sols = solve([LE1, LE2, LE3], (the_dd, phi_dd, psi_dd),
             simplify=False, rational=False)