I am working on a very stiff, Michaelis-Menten-type system (already implemented and published in MATLAB, where ode15s solves it easily). I translated it to Python, but no solver gets past the second step of the integration.
I have tried:
import numpy as np
from scipy import integrate

# time
t_start = -10
t_end = 12
t_step = 0.01
# Number of time steps: 1 extra for the initial condition
num_steps = int(np.floor((t_end - t_start) / t_step)) + 1
[...]
# integration
r = integrate.ode(JahanModel_ODE).set_integrator('lsoda', atol=1e-8, rtol=1e-6)
and also for the integrator:
r = integrate.ode(JahanModel_ODE).set_integrator('vode', method='bdf', order=5)
with different tolerances.
All I get is
UserWarning: lsoda: Excess work done on this call (perhaps wrong Dfun
type). 'Unexpected istate=%s' % istate))
or
UserWarning: vode: Excess work done on this call. (Perhaps wrong MF.)
'Unexpected istate=%s' % istate))
I also tried different values for t_step.
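For completeness, the driver loop follows the usual scipy.integrate.ode pattern (a sketch with an assumed initial-value array y0; the full code is in the gist below):
r.set_initial_value(y0, t_start)
while r.successful() and r.t < t_end:
    r.integrate(r.t + t_step)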
There already seemed to be a satisfying answer here: Integrate stiff ODEs with Python, but the links there no longer work, and googling suggests that lsoda is already superior to LSODE.
EDIT: Here is the complete code, without the plotting instances. Gistlink
I currently have an (integer) LP problem solved which has, amongst others, the following mathematical constraint as pseudocode.
Packages_T1 + Packages_T2 + Packages_T3 + RPackages = 25
It represents three package trucks (T1, T2 and T3) to each of which packages can be assigned plus a residual/spilled package variable which is used in the objective function. The current value of 25 represents the total package demand.
Let's say I want to re-solve this problem but change the current demand from 25 packages to 35 packages. When I warm start from the previous solution with 25 packages, CPLEX errors out, stating that the provided solution is infeasible, which makes perfect sense. However, it subsequently fails to repair the previous solution, even though the most straightforward way to do so would be to "up" the RPackages variable in each of these constraints.
My question is whether there is any possibility to still use the information from the previously solved problem as a warm start for the new one. Is there a way to, for example, drop all RPackages values from the solution and have them recalculated to fit the new constraint right-hand side? A "last resort" effort I thought of would be to manually recalculate all these RPackages values myself and replace them in the old solution, as sketched below, but a more automated solution would be preferred. I am using the standard CPLEX Python API, for reference.
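To illustrate, the manual repair I have in mind would look roughly like this (a sketch with hypothetical names: truck_vars holds Packages_T1..T3 of one constraint, and start is the previous solution as a dict):
new_demand = 35
assigned = sum(start[v] for v in truck_vars)   # packages already on the trucks
start[rpackages_var] = new_demand - assigned   # "up" RPackages to restore feasibility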
Thank you in advance.
Even if the warm start is not feasible, CPLEX can still use some of its information.
Let me use the zoo example from
https://www.linkedin.com/pulse/making-optimization-simple-python-alex-fleischer/
from docplex.mp.model import Model
mdl = Model(name='buses')
nbbus40 = mdl.integer_var(name='nbBus40')
nbbus30 = mdl.integer_var(name='nbBus30')
mdl.add_constraint(nbbus40*40 + nbbus30*30 >= 300, 'kids')
mdl.minimize(nbbus40*500 + nbbus30*400)
warmstart=mdl.new_solution()
warmstart.add_var_value(nbbus40,4)
warmstart.add_var_value(nbbus30,0)
mdl.add_mip_start(warmstart)
sol=mdl.solve(log_output=True)
for v in mdl.iter_integer_vars():
    print(v, " = ", v.solution_value)
gives
Warning: No solution found from 1 MIP starts.
Retaining values of one MIP start for possible repair.
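In the plain CPLEX Python API that the question mentions, the analogous knobs are the MIP start's effort level and the repair-tries limit. A minimal sketch (var_names and old_values are placeholders for the previous solution, model.lp for your model file):
import cplex

cpx = cplex.Cplex("model.lp")                      # placeholder model file
cpx.parameters.mip.limits.repairtries.set(10)      # let CPLEX attempt repairs
cpx.MIP_starts.add(cplex.SparsePair(ind=var_names, val=old_values),
                   cpx.MIP_starts.effort_level.repair)
cpx.solve()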
I'm fairly new to this, so I'm just going to give it a shot, try to be as precise as possible, and hope you'll think it warrants an answer.
I'm trying to optimize (minimize) a cost/quantity model, where both are continuous variables. Global cost should be minimized but depends on total quantity, which in turn depends on specific cost.
My code looks like this so far:
# create model
m = Model('Szenario1')

# create variables
X_WP = {}
X_IWP = {}
P_WP = {}
P_IWP = {}

for year in df1.index:
    X_WP[year] = m.addVar(vtype=GRB.CONTINUOUS, name="Wärmepumpe%d" % year)
    X_IWP[year] = m.addVar(vtype=GRB.CONTINUOUS, name="Industrielle Wärmepumpe%d" % year)
    # Price in year i = base price * ((sum of newly installed capacity
    # + sum of historical capacity) ** (math.log(LearningRate)/math.log(2)))
    P_WP[year] = P_WP0 * (quicksum(X_WP[year] for year in df1.index) ** learning_factor)
    P_IWP[year] = m.addVar(vtype=GRB.CONTINUOUS, name="Preis Industrielle Wärmepumpe%d" % year)

X_WP[2016] = 0
X_IWP[2016] = 0

# Constraints and objective
for year in df1.index:
    m.addConstr((X_WP[year]*VLST_WP + X_IWP[year]*VLST_IWP == Wärmemenge[year]), name="Demand(%d)" % year)

obj = quicksum(
    ((X_WP[year] - X_WP[year-1])*P_WP[year] + X_WP[year]*Strompreis_WP*VLST_WP) +
    ((X_IWP[year] - X_IWP[year-1])*P_IWP[year] + X_IWP[year]*Strompreis_EHK*VLST_IWP)
    for year in Wärmemenge.index)

m.setObjective(obj, GRB.MINIMIZE)
m.update()
m.optimize()
X is quantity and P is price. WP and IWP are two different technologies (more will be added later). Since X and P are multiplied, the problem is nonlinear, and so far I haven't found a way to formulate the objective so that Gurobi can handle it.
My research online and on Stack Overflow basically led me to the conclusion that I can either linearize and solve with Gurobi, find another solver that can handle this MINLP, or reformulate my objective in a way that Gurobi can solve. Since I've already made myself familiar with Gurobi, that would be my preferred choice.
Any advice on what's best at this point?
Would be highly appreciated!
I'd suggest rewriting your Python code using Pyomo.
This is a general-purpose optimization modeling framework for Python which can construct valid inputs for Gurobi as well as a number of other optimization tools.
In particular, it will allow you to use Ipopt as a backend, which does solve (at least some) nonlinear problems. Even if Ipopt cannot solve your nonlinear problem, using Pyomo will allow you to test that quickly and then easily move back to a linearized representation in Gurobi if things don't work out.
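As a rough sketch of what that migration could look like (a toy nonlinear price/quantity model with made-up numbers, not the actual model from the question):
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(0.001, None), initialize=1.0)   # quantity
m.p = pyo.Var(bounds=(0, None), initialize=1.0)       # price
# learning-curve style coupling: price falls as installed quantity grows
m.price_def = pyo.Constraint(expr=m.p == 100 * m.x ** (-0.2))
m.demand = pyo.Constraint(expr=m.x >= 10)
m.cost = pyo.Objective(expr=m.x * m.p, sense=pyo.minimize)

pyo.SolverFactory('ipopt').solve(m)    # requires Ipopt to be installed
print(pyo.value(m.x), pyo.value(m.p))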
I am trying to solve two coupled systems of equations, called here system A and system B. One of the two is an ODE system.
To avoid copying the data shared between the two systems, I would like a pointer-like structure. For that, I use the mechanism of NumPy views.
A bit of code:
import numpy as np
import scipy.integrate

def f(t, y, k):          # placeholder right-hand side; the real f is omitted here
    return -k * y
t0,t1,dt = 0.0,5.0, 1.0
data = np.ones((5,2))
data[:,1]*=2
y=np.array([0.0,0.0]) ### no matter default value
r = scipy.integrate.ode(f)
r.set_integrator('dopri5', rtol=1e-3, atol=1e-6 )
r.set_f_params(0.05)
#r.set_initial_value(y, t0); r._y = data[2] ### Apparently equivalent
r.set_initial_value(data[2], t0) ### Apparently equivalent
print(np.shares_memory(r.y,y))
print(np.shares_memory(r.y,data))
Here, at the initial state, I have synchronization between r.y (system A) and data[2] (the variable named data holds the data of system B). If I modify one, the other is modified too, and vice versa. Typing the command r.y.base confirms that r.y is just a view of the array named data. That is the behavior I want.
Now the problem starts. If I advance my ODE system:
while r.successful() and r.t < t1:
    r.integrate(r.t+dt, step=True)
    print(r.t+dt, r.y)
    print(np.shares_memory(r.y, data))
    print(data)
data and r.y are no longer synchronized; r.y is no longer a view of data.
It looks like the integrate function creates a new array for its attribute r.y rather than updating it in place. I have read the source code of this function:
https://github.com/scipy/scipy/blob/v0.19.1/scipy/integrate/_ode.py#L396
but it quickly refers to Fortran code, and my understanding stops there.
How can I solve (or work around) this problem other than by copying the data between r.y and data (which would also mean managing the synchronization manually)?
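For reference, that manual fallback would look like this (a sketch):
while r.successful() and r.t < t1:
    r.integrate(r.t + dt)
    data[2] = r.y    # copy the new state back into the shared array by hand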
Could this be a bug in SciPy?
Thanks for your help
Hello, and thanks in advance. Fresh off the heels of this question, I acquired more RAM and now have enough memory to fit all the matrices I need to run a linear programming solver. The problem now is that none of the linear programming packages in R seem to support long vectors (i.e., large matrices).
I've tried the functions Rsymphony_solve_LP, Rglpk_solve_LP, and lp from the packages Rsymphony, Rglpk, and lpSolve respectively. All report an error similar to the one below:
Error in rbind(const.mat, const.dir.num, const.rhs) :
long vectors not supported yet: bind.c:1544
I also have my code below in case that helps. The constraint matrix mat is my big matrix (7062 rows by 364520 columns), created using the package bigmemory. When I run the line below, the matrix is pulled into memory, and after a while the errors show.
Rsymph <- Rsymphony_solve_LP(obj, mat[1:nrow(mat), 1:ncol(mat)], dir, rhs, types=types, max=max, write_lp=T)
I'm guessing this is a hard-coded limitation in each of the three functions? Is there currently a linear programming solver in R, or even Python, that supports long vectors? Should I contact the package maintainers or just edit the code myself? Thanks!
The package lpSolveAPI can solve long-vector linear programming problems. You first declare a linear programming object and then add the constraints:
library(lpSolveAPI)
#Generate Linear Programming Object
lprec <- make.lp(nrow = 0 # Number of Constraints
, ncol = ncol(mat) # Number of Decision Variables
)
#Set Objective Function to Minimize
set.objfn(lprec, obj)
#Add Constraints
#Note Direction and RHS is included along with Constraint Value
for (i in 1:nrow(mat)) {
    add.constraint(lprec, mat[i,], dir[i], rhs[i])
    print(i)   # progress indicator
}
#Set Decision Variable Type
set.type(lprec, c(1:ncol(mat)), type = c("binary"))
#Solve Model
solve(lprec)
#Obtain Solutions
get.total.iter(lprec)
get.objective(lprec)
get.variables(lprec)
There's a good introduction to this package here.
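On the Python side, which the question also asks about, scipy.optimize.milp (available since SciPy 1.9) accepts sparse constraint matrices, so a 7062 x 364520 problem never has to be materialized densely. A sketch, assuming obj and rhs are NumPy arrays and A_sparse is the constraint matrix in scipy.sparse form:
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

con = LinearConstraint(A_sparse, lb=rhs, ub=rhs)    # equality rows; relax lb/ub per dir
res = milp(c=obj, constraints=con,
           integrality=np.ones(A_sparse.shape[1]),  # all variables integer...
           bounds=Bounds(0, 1))                     # ...and binary
print(res.status, res.fun)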
I'm getting this error when I try to compute the logistic function for a data mining method I'm implementing:
RuntimeWarning: overflow encountered in exp
My code:
import numpy as np

def logistic_function(x):
    # x = np.float64(x)
    return 1.0 / (1.0 + np.exp(-x))
If I understood correctly from some related questions, the problem is that np.exp() returns a huge value. I saw suggestions to let NumPy ignore the warnings, but the problem is that when I get this warning, the results of my method are horrible, whereas when I don't get it, they are as expected. So making NumPy ignore the warning is not a solution for me at all. I don't know what is wrong or how to deal with it.
I don't even know whether this is the result of a bug, because sometimes I get this warning and sometimes not! I have gone through my code many times and everything looks correct!
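For what it's worth, the warning is easy to reproduce with a large negative input:
import numpy as np

x = np.float64(-1000)
print(1.0 / (1.0 + np.exp(-x)))   # RuntimeWarning: overflow encountered in exp
# np.exp(1000) overflows to inf, so the result collapses to 0.0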
You should compute the logistic function using either scipy.special.expit, which in recent enough SciPy versions is more stable than your solution (although earlier versions got it wrong), or by reducing it to tanh:
def logistic_function(x):
    return .5 * (1 + np.tanh(.5 * x))
This version of the function is stable, fast, and fairly accurate.
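For example, both stable forms agree and stay warning-free even for extreme arguments:
import numpy as np
from scipy.special import expit

x = np.array([-1000.0, 0.0, 1000.0])
print(expit(x))                       # [0.  0.5 1. ]
print(.5 * (1 + np.tanh(.5 * x)))     # same values, no overflow warning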