Python SciPy ODE solver not converging

I'm trying to use scipy's ode solver to plot the interaction between a 2D system of equations. I'm attempting to alter the parameters passed to the solver by the following block of code:
# define maximum number of iteration steps for ode solver iteration
m = 1     # power of iteration
N = 2**m  # number of steps
# setup a try-catch formulation to increase the number of steps as needed for solution to converge
while True:
    try:
        z = ode(stateEq).set_integrator("vode", nsteps=N, method='bdf', max_step=5e5)
        z.set_initial_value(x0, t0)
        for i in range(1, t.size):
            if i % 1e3 == 0:
                print 'still integrating...'
            x[i, :] = z.integrate(t[i])  # get one more value, add it to the array
            if not z.successful():
                raise RuntimeError("Could not integrate")
        break
    except:
        m += 1
        N = 2**m
        if m % 2 == 0:
            print 'increasing nsteps...'
            print 'nsteps = ', N
Running this never breaks the while loop. It keeps increasing nsteps forever and the system never gets solved. If I don't put it in the while loop, the system does get solved, I think, because the solution gets plotted. Is the while loop necessary? Am I formulating the solver incorrectly?

The parameter nsteps regulates how many integration steps can be maximally performed during one sampling step (i.e., a call of z.integrate). Its default value is okay if your sampling step is sufficiently small to capture the dynamics. If you want to integrate over a huge time span in one large sampling step (e.g., to get rid of transient dynamics), the value can easily be too small.
The point of this parameter is to avoid problems arising from unexpectedly very long integrations. For example, if you want to perform a given integration for 100 values of a control parameter in a loop overnight, you do not want to discover the next morning that run no. 14 was pathological and is still running.
If this is not relevant to you, just set nsteps to a very high value and stop worrying about it. There is certainly no point in successively increasing nsteps; you are just performing the same calculations all over again.
Running this never breaks the while loop. It keeps increasing the nsteps forever and the system never gets solved.
This suggests that you have a different problem than nsteps being exceeded, most likely that the problem is not well posed. Carefully read the error message produced by the integrator. I also recommend that you check your differential equations. It may help to look at the solutions until the integration fails to see what is going wrong, i.e., plot x after running this:
z = ode(stateEq)
z.set_integrator("vode", nsteps=1e10, method='bdf', max_step=5e5)
z.set_initial_value(x0, t0)
for i, time in enumerate(t):
    x[i, :] = z.integrate(time)
    if not z.successful():
        break
Your value for max_step is very high (this should not be higher than the time scale of your dynamics). Depending on your application, this may very well be reasonable, but then it suggests that you are working with large numbers. This in turn may mean that the default values of the parameters atol and first_step are not suited for your situation and you need to adjust them.
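For illustration, here is a minimal sketch of what such an adjustment could look like; the concrete tolerance and step values are placeholders (my assumptions, not recommendations) that you would scale to the magnitude and time scale of your own dynamics:
z = ode(stateEq)
z.set_integrator("vode", method='bdf', nsteps=100000,
                 atol=1e-3,        # absolute tolerance, scaled to the size of your state variables
                 first_step=1e-2,  # initial step, on the time scale of the dynamics
                 max_step=1.0)     # keep steps below the fastest time scale
z.set_initial_value(x0, t0)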

Related

How is it possible to call functions in PuLP (Python) constraints?

Use case
This is just a simple example to illustrate how and why it is not working as expected.
There is a set of processes, each of which has a start and a finishing timestamp.
The start timestamp of a process must be after the finishing timestamp of its predecessor. So far, so good.
Consideration
Regarding constraints: Shouldn't it be possible to carry out more complex operations than arithmetic equations (e.g. queries and case distinctions)?
This is illustrated in the code below.
The standard formulation of the constraint works properly.
But it fails if you put a function call into the equation.
def func(p):
    if self.start_timestamps[p] >= self.end_timestamps[p-1]:
        return 1
    return 0

# constraint for precedences of processes
for process_idx in self.processes:
    if process_idx > 0:
        # works fine!
        model += self.start_timestamps[process_idx] >= self.end_timestamps[process_idx-1]
        # doesn't work, but should?!
        model += func(process_idx) == 1
Questions
Are there ways to resolve that via function call? (In a more complex case, for example, you would have to make different queries and iterations within the function.)
If it is not possible with PuLP, are there other OR libraries that can handle things like that?
Thanks!

gtol parameter issues [mystic]

I am trying to optimize a function using the mystic library in Python, and have a general question about the gtol parameter.
The parameter space I'm searching is of high dimension (30-40), and for this reason it seemed fitting to set gtol = 40000. Strangely, the algorithm seems to stop after 30000 iterations, although I specified gtol as 40000, which (to my knowledge) means that the algorithm should see 40000 identical iterations before it stops running.
My function call is pretty basic:
stepmon = VerboseLoggingMonitor(1, 1)
result = diffev2(utilss, x0=[1/value]*value, bounds=[(0, 0.2)]*value, npop=150, gtol=40000, disp=True, full_output=True, itermon=stepmon)
I checked the merit function's evolution, and it is flat from around the 29000th iteration. The last 1000 iterations are identical, but it stops there instead of running the remaining 39000 iterations required by gtol.
Am I misinterpreting the gtol parameter? Or what am I missing here?
Thanks in advance.
I'm the mystic author. That is the correct interpretation of gtol; however, you are probably ignoring maxiter (and maxfun), which is the maximum number of iterations (and function evaluations) to perform.
If you don't set maxiter (the default is None), then I believe the default setting for diffev2 is 10 * nDim * nPop. Try setting maxiter=100000, or something like that.
If the termination message says Warning: Maximum number of iterations has been exceeded, then maxiter caused it to stop.
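For example, the original call could be extended so that gtol, rather than the iteration cap, decides when to stop; the specific maxiter and maxfun values below are only illustrative:
stepmon = VerboseLoggingMonitor(1, 1)
# raise the iteration and evaluation caps so that gtol is what actually terminates the run
result = diffev2(utilss, x0=[1/value]*value, bounds=[(0, 0.2)]*value, npop=150,
                 gtol=40000, maxiter=100000, maxfun=1000000,
                 disp=True, full_output=True, itermon=stepmon)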

handle trajectory singularity: delete photons causing error

I am coding a black hole (actually photons orbiting a black hole) and I need to handle an exception for radius values that are smaller than the limit distance.
I've tried using if and while True:
def Hamiltonian(r, pt, pr, pphi):
    H = (-((1-rs/r)**-1)*(pt**2)/2 + (1-rs/r)*(pr**2)/2 + (pphi**2)/(2*(r**2)))
    if np.amax(H) < 10e-08:
        print("Your results are correct")
    else:
        print("Your results are wrong")
    return(H)

def singularity(H, r):
    if (r).any < 1.5*rs:
        print(H)
    else:
        print("0")
    return(H, r)

print(Hamiltonian(r00, pt00, pr00, pphi00))
I'd like to handle the case r < 1.5*rs so that I don't get the division error message and weird orbits anymore. So far my code doesn't change anything about the problem; I still get this error message:
"RuntimeWarning: divide by zero encountered in true_divide
H = (-((1-rs/r)**-1)*(pt**2)/2 + (1-rs/r)*(pr**2)/2 + (pphi**2)/(2*(r**2)))"
and my orbits are completely wrong (for example, a photon that should fall straight into the black hole instead goes in and comes back out again, and so on, because of the singularity at r < 1.5*rs).
I'd like to delete the photons causing the problem, but I don't know how. Can anyone please help me?
I think you mention that we do not know exactly what goes on at a singularity. Any answer provided here most likely would not be accurate, but let's just assume that you know the behavior/dynamics in the near-zero neighborhood. I do not know how you are calling the Hamiltonian function, but I imagine you are doing it in one of two ways.
1) You have a pre-defined loop that passes each value into the function and stores the resulting H for each value in that loop.
2) You are passing in vectors of the same length and doing element-by-element math in the H function.
In the first case, you could either do a pre-check and write a new function for the near-zero behavior, calling that new function whenever you are in the near-zero neighborhood, or you could put the check inside the Hamiltonian function itself and call the new function from there. The latter is what I would prefer, as it keeps the front end (the best word I have for it) fairly clean and encapsulates the logic/math in your Hamiltonian function. In your comments you say you want to delete the photon within some radius; that is extremely easy to do with this method by terminating the for loop (breaking out at that time step) and plotting only what you had up to that point (see the sketch after the next paragraph).
In the second case, you would have to build the new vector manually, with checks on whether elements of your vectors fall within the singularity neighborhood. This is a little more difficult, depends on the shape of your inputs, and element-by-element math is in my opinion harder to debug.
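As a rough sketch of the first approach, here is a toy stepping loop; rs, the initial radius, and the fake step dr are placeholder values, and the update line stands in for whatever integrator you actually use:
rs = 1.0           # placeholder Schwarzschild radius
r = 3.0 * rs       # placeholder initial radius
dr = -0.05 * rs    # crude stand-in for one integration step in r

r_history = []
for step in range(1000):
    r = r + dr                 # replace with your real integration step
    if r < 1.5 * rs:
        break                  # photon entered the forbidden region: stop integrating it
    r_history.append(r)

# plot/analyse only r_history, i.e. the part of the trajectory recorded
# before the photon was deleted
print(len(r_history), "points kept before the photon was deleted")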

Python performance questions for: toggling +/-1, set instantiation, set membership check

I've been working on the following code, which sort of maximizes the number of unique (in lowest terms) p by q blocks subject to some constraints. It works perfectly for small inputs: e.g. input 50000, output 1898.
I need to run it on numbers greater than 10^18, and while I have a different solution that gets the job done, this particular version gets super slow (it made my desktop reboot at one point), and that is what my question is about.
I'm trying to figure out what is causing the slowdown in the following code, and roughly how slow each candidate is (in order of magnitude).
The candidates for slowness:
1) the (-1)**(i+1) term? Does Python do this efficiently, or is it literally multiplying out -1 by itself a ton of times?
[EDIT: still looking into how the underlying __pow__ operation works, but having tested setting j = -j instead: this is faster; see the timing sketch after the code below.]
2) set instantiation/size? Is the set getting too large? Obviously this would impact membership check if the set can't get built.
3) set membership check? This indicates O(1) behavior, although I suppose the constant continues to change.
Thanks in advance for insight into these processes.
import math
import time

a = 10**18
ti = time.time()
setfrac = set([1])
x = 1
y = 1
k = 2
while True:
    k += 1
    t = 0
    for i in xrange(1, k):
        mo = math.ceil(k/2.0) + ((-1)**(i+1))*(math.floor(i/2.0))
        if (mo/(k-mo) not in setfrac) and (x+(k-mo) <= a and y+mo <= a):
            setfrac.add(mo/(k-mo))
            x += k-mo
            y += mo
            t += 1
    if t == 0:
        break
print len(setfrac)+1
print x
print y
to = time.time()-ti
print to
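Regarding candidate (1), a quick way to get an order-of-magnitude feel is a standalone timeit comparison of the two sign-alternation idioms. This micro-benchmark is only a sketch and not part of the program above:
import timeit

# compare computing the alternating sign via a power with toggling a variable
pow_time = timeit.timeit("(-1)**(i+1)", setup="i = 12345", number=10**6)
toggle_time = timeit.timeit("j = -j", setup="j = -1", number=10**6)
print("(-1)**(i+1): %.4f s" % pow_time)
print("j = -j:      %.4f s" % toggle_time)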

Fitting a function to data using pyminuit

I wrote a Python (2.7) program to evaluate some scientific data. Its main task is to fit this data to a certain function (1). Since this is quite a lot of data, the program distributes the jobs (= "fit one set of data") to several cores using multiprocessing. In a first attempt I implemented the fitting process using curve_fit from scipy.optimize, which works pretty well.
So far, so good. Then we saw that the data is more precisely described by a convolution of function (1) and a Gaussian distribution. The idea was to fit the data first to function (1), take the resulting values as initial guesses, and then fit the data again to the convolution. Since the data is quite noisy and I am trying to fit it to a convolution with seven parameters, the results this time were rather bad. In particular, the Gaussian parameters were to some extent physically impossible.
So I tried implementing the fitting process in PyMinuit, because it allows limiting the parameters to certain bounds (like a positive amplitude). As I have never worked with Minuit before and want to start small, I rewrote the first ("easy") part of the fitting process. The code snippet doing the job looks like this (simplified):
import minuit
import numpy as np

# temps are the x-values
# pol are the y-values
def gettc(temps, pol, guess_values):
    try:
        efunc_chi2 = lambda a, b, c, d: np.sum((efunc(temps, a, b, c, d) - pol)**2)
        fit = minuit.Minuit(efunc_chi2)
        fit.values['a'] = guess_values[0]
        fit.values['b'] = guess_values[1]
        fit.values['c'] = guess_values[2]
        fit.values['d'] = guess_values[3]
        fit.fixed['d'] = True
        fit.maxcalls = 1000
        #fit.tol = 1000.0
        fit.migrad()
        param = fit.args
        popc = fit.covariance
    except minuit.MinuitError:
        return np.zeros(len(guess_values))
    return param, popc
Where efunc() is function (1). The parameter d is fixed since I don't use it at the moment.
PyMinuit function reference
Finally, here comes the actual problem: When running the script, Minuit prints for almost every fit
VariableMetricBuilder: Tolerance is not sufficient - edm is 0.000370555 requested 1e-05 continue the minimization
to stdout, with different values for edm. The fit still works fine, but the printing slows the program down considerably. I tried increasing fit.tol, but there are a lot of datasets that return even higher edm values. Then I tried just hiding the output of fit.migrad() using this solution, which actually works. And now something strange happens: somewhere in the middle of the program, the processes on all cores fail simultaneously. Not at the first fit, but in the middle of my whole dataset. The only thing I changed was
with suppress_stdout_stderr():
    fit.migrad()
I know this is quite a long introduction, but I think it helps to know the whole framework. I would be very grateful if someone has an idea of how to approach this problem.
Note: Function (1) is defined as
def efunc(x, a, b, c, d):
    if x < c:
        return a*np.exp(b*x**d)
    else:
        return a*np.exp(b*c**d)  # therefore constant

efunc = np.vectorize(efunc)
