Multiple scipy.integrate.ode instances - python

I would like to use scipy.integrate.ode (or scipy.integrate.odeint) instances in multiple threads (one for each CPU core) in order to solve multiple IVPs at a time. However the documentation says: "This integrator is not re-entrant. You cannot have two ode instances using the “vode” integrator at the same time."
(Also odeint causes internal errors if instantiated multiple times although the documentation does not say so.)
Any idea what can be done?

One option is to use multiprocessing (i.e. use processes instead of threads). Here's an example that uses the map function of the multiprocessing.Pool class.
The function solve takes a set of initial conditions and returns a solution generated by odeint. The "serial" version of the code in the main section calls solve repeatedly, once for each set of initial conditions in ics. The "multiprocessing" version uses the map function of a multiprocessing.Pool instance to run several processes simultaneously, each calling solve. The map function takes care of doling out the arguments to solve.
My computer has four cores, and as I increase num_processes, the speedup maxes out at about 3.6.
from __future__ import division, print_function
import sys
import time
import multiprocessing as mp
import numpy as np
from scipy.integrate import odeint
def lorenz(q, t, sigma, rho, beta):
    x, y, z = q
    return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

def solve(ic):
    t = np.linspace(0, 200, 801)
    sigma = 10.0
    rho = 28.0
    beta = 8/3
    sol = odeint(lorenz, ic, t, args=(sigma, rho, beta), rtol=1e-10, atol=1e-12)
    return sol

if __name__ == "__main__":
    ics = np.random.randn(100, 3)

    print("multiprocessing:", end='')
    tstart = time.time()
    num_processes = 5
    p = mp.Pool(num_processes)
    mp_solutions = p.map(solve, ics)
    tend = time.time()
    tmp = tend - tstart
    print(" %8.3f seconds" % tmp)

    print("serial: ", end='')
    sys.stdout.flush()
    tstart = time.time()
    serial_solutions = [solve(ic) for ic in ics]
    tend = time.time()
    tserial = tend - tstart
    print(" %8.3f seconds" % tserial)

    print("num_processes = %i, speedup = %.2f" % (num_processes, tserial/tmp))

    check = [(sol1 == sol2).all()
             for sol1, sol2 in zip(serial_solutions, mp_solutions)]
    if not all(check):
        print("There was at least one discrepancy in the solutions.")
On my computer, the output is:
multiprocessing: 6.904 seconds
serial: 24.756 seconds
num_processes = 5, speedup = 3.59
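A small aside: on Python 3, the Pool is usually wrapped in a with block so the worker processes are reliably cleaned up. A minimal variant of the multiprocessing part above (reusing the solve function already defined):

import multiprocessing as mp
import numpy as np

if __name__ == "__main__":
    ics = np.random.randn(100, 3)
    # The context manager terminates the pool's workers on exit
    with mp.Pool(processes=5) as pool:
        mp_solutions = pool.map(solve, ics)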

SciPy's "vode" integrator wraps the Fortran VODE solver from netlib, a predecessor of the CVODE solver in LLNL's SUNDIALS suite.
The current version of the CVODE solver, 3.2.2, is re-entrant, which means that it can be used to solve multiple problems concurrently. The relevant information appears in the User Documentation for CVODE v3.2.0 (SUNDIALS v3.2.0):
All state information used by cvode to solve a given problem is saved in a structure, and a pointer
to that structure is returned to the user. There is no global data in the cvode package, and so, in this
respect, it is reentrant. State information specific to the linear solver is saved in a separate structure,
a pointer to which resides in the cvode memory structure. The reentrancy of cvode was motivated
by the anticipated multicomputer extension, but is also essential in a uniprocessor setting where two
or more problems are solved by intermixed calls to the package from within a single user program.
But I don't know whether SciPy.integrate.ode, or other ode solvers like scikits.odes.ode, support this concurrency.
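For what it's worth (my own observation, not confirmed in this thread): the newer scipy.integrate.solve_ivp methods such as 'RK45' and 'BDF' are implemented in pure Python and keep all their state in per-call objects, so concurrent calls do not share the global state that makes 'vode' non-re-entrant. That means threads are at least safe, as in the sketch below, although the GIL limits the speedup and the process-based approach above remains the way to use all cores.

from concurrent.futures import ThreadPoolExecutor

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # A trivial right-hand side; substitute your own system here
    return -y

ics = np.random.randn(8, 1)
with ThreadPoolExecutor(max_workers=4) as ex:
    # Each call builds its own solver instance, so the threads
    # don't interfere with one another
    sols = list(ex.map(lambda ic: solve_ivp(rhs, (0, 10), ic,
                                            method='RK45'), ics))
print([sol.y[:, -1] for sol in sols])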

Related

Implement method of lines to solve PDE in Python scipy with comparable performance to Matlab's ode15s

I want to use the method of lines to solve the thin-film equation. I have implemented it (with gamma=mu=0) in Matlab using ode15s and it seems to work fine:
N = 64;
x = linspace(-1,1,N+1);
x = x(1:end-1);
dx = x(2)-x(1);
T = 1e-2;
h0 = 1+0.1*cos(pi*x);

[t,h] = ode15s(@(t,y) thinFilmEq(t,y,dx), [0,T], h0);

function dhdt = thinFilmEq(t,h,dx)
    phi = 0;
    hxx = (circshift(h,1) - 2*h + circshift(h,-1))/dx^2;
    p = phi - hxx;
    px = (circshift(p,-1)-circshift(p,1))/dx;
    flux = (h.^3).*px/3;
    dhdt = (circshift(flux,-1) - circshift(flux,1))/dx;
end
The film just flattens after some time, and for large time the film should tend to h(t->inf)=1. I haven't done any rigorous checks or convergence analysis, but at least the result looks promising after spending less than 5 minutes coding it.
I want to do the same thing in Python, and I tried the following:
import numpy as np
import scipy.integrate as spi
def thin_film_eq(t, h, dx):
    print(t)  # to check the current evaluation time for debugging
    phi = 0
    hxx = (np.roll(h,1) - 2*h + np.roll(h,-1))/dx**2
    p = phi - hxx
    px = (np.roll(p,-1) - np.roll(p,1))/dx
    flux = h**3*px/3
    dhdt = (np.roll(flux,-1) - np.roll(flux,1))/dx
    return dhdt
N = 64
x = np.linspace(-1,1,N+1)[:-1]
dx = x[1]-x[0]
T = 1e-2
h0 = 1 + 0.1*np.cos(np.pi*x)
sol = spi.solve_ivp(lambda t,h: thin_film_eq(t,h,dx), (0,T), h0, method='BDF', vectorized=True)
I add a print statement inside the function so I can check the current progress of the program. For some reason, it is taking very tiny time steps, and after waiting a few minutes it is still stuck at t=3.465e-5, with dt smaller than 1e-10 (it had not finished by the time I finished typing this question, and it probably won't within any reasonable time). The Matlab program is done within a second, with only 14 time steps taken (I only specify the time span, and it outputs 14 time steps with everything else kept at default). I want to ask the following:
1. Have I done anything wrong that dramatically slows down the computation time of my Python code? What settings should I choose for the solve_ivp call? One thing I'm not sure about is whether I do the vectorization properly, and whether I wrote the function in the correct way. I know this is a stiff ODE, but the ultra-small time steps taken by the solver suggest something else is wrong (one possibility is sketched after this list).
2. Is the difference really just down to the difference in the ODE solver? scipy.integrate.solve_ivp(f, method='BDF') is the recommended substitute for ode15s according to the official SciPy documentation, but for this particular example the performance difference is one second versus ages. The difference is a lot bigger than I thought.
3. Are there other methods I can try in Python for solving similar PDEs (something along the lines of finite differences/method of lines)? I mean utilizing existing libraries, preferably those in scipy.
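Regarding question 1, two things are worth checking (the sketch below is an assumption-laden illustration, not a tested answer from this thread). First, with vectorized=True, solve_ivp calls the right-hand side with h of shape (n, k), and np.roll without an axis argument rolls the flattened array, so the vectorized path is likely computing the wrong derivative; dropping vectorized=True sidesteps that. Second, BDF approximates a dense Jacobian by finite differences unless told otherwise, which is expensive for a stiff method-of-lines system; solve_ivp accepts a jac_sparsity argument for exactly this. A minimal sketch, reusing thin_film_eq and the setup above, and assuming the seven-point periodic stencil implied by composing the finite differences:

import numpy as np
import scipy.integrate as spi

N = 64
x = np.linspace(-1, 1, N + 1)[:-1]
dx = x[1] - x[0]
T = 1e-2
h0 = 1 + 0.1*np.cos(np.pi*x)

# dhdt[i] couples to h[i-3..i+3] (with periodic wraparound), so give BDF
# the sparsity pattern instead of letting it difference a dense Jacobian
sparsity = np.zeros((N, N))
for i in range(N):
    for offset in range(-3, 4):
        sparsity[i, (i + offset) % N] = 1

sol = spi.solve_ivp(lambda t, h: thin_film_eq(t, h, dx), (0, T), h0,
                    method='BDF', jac_sparsity=sparsity)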

How to pass a ModelResult class from lmfit in a multiprocess shared array?

I have some issues using lmfit in different process to make my code faster. How can I define some share array that contains the results of every fitting?
I have a cube of data, with positions a, b, and values x and f(x). I made a model in lmfit that works nicely and fits f(x) for one point, returning some parameters. lmfit returns a class called ModelResult(), which contains all of these parameters and some useful extra data. So I need to run this fitting for each a and b, and later make a cube with these parameters and maybe the extra data. I can run this in a linear way (without parallelizing), but I have more than 1000 points and the model is complicated, so it takes more than 15000 seconds.
My problem starts when I use the multiprocessing lib. I need to share data between the processes, so that when a process finishes it locks the variable, stores its result inside, and then unlocks the variable. The multiprocessing lib has Value() and Array() to do what I need. I intended to use Array, and made a change of variables from a and b to c, where c ranges over a*b. But I can't define an Array that keeps the ModelResult() for each c.
Here the code:
import multiprocessing as mp
import numpy as np
import time
from lmfit import Model
from numpy import sqrt, exp, pi
#Set time zero
start_time = time.time()
#Example of functions to fit
def gaussian(x, amp, cen, wid):
    """1-d gaussian: gaussian(x, amp, cen, wid)"""
    return (amp / (sqrt(2*pi) * wid)) * exp(-(x-cen)**2 / (2*wid**2))

def linear(x, slope, intercept):
    """a linear function"""
    return slope*x + intercept
#Function to fit every point
def fit_point(a, b, data_cube, x, pars, mod):
    pos = int(b + (a*10))
    data_point = np.array(data_cube[:, a, b])
    #print(pos, mp.current_process().name)
    error_point = np.array((data_point*0) + 0.002)  # Example error
    res_point = mod.fit(data_point, pars, weights=1./error_point, x=x)
    print(res_point.fit_report())
    cube_res[pos] = res_point
    return  # cube_res[:]
#Invented some data
x=np.arange(10)
data_cube=np.random.rand(10, 10, 10)
#Example of a model with 2 gaussians and a line
mod = Model(linear, prefix='l_')+Model(gaussian, prefix='g1_')+Model(gaussian, prefix='g2_')
pars= mod.make_params()
pars['g1_amp'].set(0.5)
pars['g2_amp'].set(0.5)
pars['g1_cen'].set(2)
pars['g2_cen'].set(3)
pars['g1_wid'].set(0.5)
pars['g2_wid'].set(0.5)
pars['l_slope'].set(1)
pars['l_intercept'].set(1)
#Definition of the shared Array #Where I think there is the problem!
cube_res = mp.Array('u', 100)
#Definition of the process and starts
processes = []
for a in np.arange(0, 10):
    for b in np.arange(0, 10):
        process = mp.Process(target=fit_point, args=(a, b, data_cube, x, pars, mod))
        process.start()
        processes.append(process)

for process in processes:
    process.join()
print('Time count')
print("--- %s seconds ---" % (time.time() - start_time))
#Intent to print some results
#print(cube_res[20].fit_report)
#Final, to recover a,b
#final_cube_res = np.reshape(cube_res, (100,100))
The errors are:
TypeError: unicode string expected instead of ModelResult instance
This is because I defined mp.Array('u', 100), where 'u' is unicode and 100 is the length.
I don't know how to define the Array so that it can store a ModelResult.
Thanks for reading!
The error message is telling you that you cannot put an lmfit.ModelResult into a string. In fact, your
cube_res = mp.Array('u', 100)
is saying that cube_res is an Array of 100 unicode characters. That is, each result would need to be 1 character long. I think what you want instead is to use a multiprocessing.Manager().dict to hold the results:
cube_res = mp.Manager().dict()
This will allow you to use the (a, b) position as the key, and you can then put the ModelResult in as the value...
... but: you won't be able to save the ModelResult directly, because the data shared between processes needs to be pickled, and in general complex objects, especially those with methods, are not easily picklable. The good news is that lmfit.ModelResult has a dumps() method that you can use to convert that object into a picklable JSON string. So then your code would use
    res_point = mod.fit(data_point, pars, weights=1./error_point, x=x)
    print(res_point.fit_report())
    cube_res[(a, b)] = res_point.dumps()
    return
in your fit_point() function.
We're not done yet, because restoring the ModelResult from the dumped json string is slightly complicated (lmfit has helper functions for this, but these assume you've written the results to a file). You would have to do this after you've joined all your processes:
print('Time count')
print("--- %s seconds ---" % (time.time() - start_time))
import lmfit
# make a dummy ModelResult -- we'll overwrite everything for this
modres = lmfit.model.ModelResult(lmfit.Model(gaussian), lmfit.Parameters())
# make a dictionary of the functions you actually used (the function *names*
# are included in the dumped string, but not the functions themselves)
funcdefs = {'gaussian': gaussian, 'linear': linear}
for pos, dumpval in cube_res.items():
    modelresult = modres.loads(dumpval, funcdefs=funcdefs)
    print('### Result for fit ', pos)
    print(modelresult.fit_report())
I think that should get you started...
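One alternative worth sketching (mine, not part of the original answer): because dumps() yields a plain picklable string, you can skip the shared Array/Manager machinery entirely and let multiprocessing.Pool collect the results. This reuses the gaussian and linear functions defined above; the helper name fit_one and the scalar error value are assumptions for illustration:

import multiprocessing as mp
import numpy as np
from lmfit import Model

def fit_one(args):
    (a, b), data_point, x = args
    # Rebuild the model inside the worker so nothing exotic needs pickling;
    # gaussian and linear are the module-level functions defined above
    mod = (Model(linear, prefix='l_') + Model(gaussian, prefix='g1_')
           + Model(gaussian, prefix='g2_'))
    pars = mod.make_params(g1_amp=0.5, g2_amp=0.5, g1_cen=2, g2_cen=3,
                           g1_wid=0.5, g2_wid=0.5, l_slope=1, l_intercept=1)
    error = np.full(data_point.shape, 0.002)  # example error, as in the question
    res = mod.fit(data_point, pars, weights=1./error, x=x)
    return (a, b), res.dumps()

if __name__ == "__main__":
    x = np.arange(10)
    data_cube = np.random.rand(10, 10, 10)
    jobs = [((a, b), data_cube[:, a, b], x)
            for a in range(10) for b in range(10)]
    with mp.Pool() as pool:
        cube_res = dict(pool.map(fit_one, jobs))

Each value in cube_res is then a dumped JSON string, which you can restore with the loads() recipe shown above.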

Numeric Integration Python versus Matlab

My python code takes about 6.2 seconds to run. The Matlab code runs in under 0.05 seconds. Why is this and what can I do to speed up the Python code? Is Cython the solution?
Matlab:
function X=Test
    nIter=1000000;
    Step=.001;
    X0=1;
    X=zeros(1,nIter+1); X(1)=X0;
    tic
    for i=1:nIter
        X(i+1)=X(i)+Step*(X(i)^2*cos(i*Step+X(i)));
    end
    toc
    figure(1)
    plot(0:nIter,X)
Python:
import numpy as np
import time

nIter = 1000000
Step = .001

x = np.zeros(1+nIter)
x[0] = 1

start = time.time()
for i in range(1,1+nIter):
    x[i] = x[i-1] + Step*x[i-1]**2*np.cos(Step*(i-1)+x[i-1])
end = time.time()
print(end - start)
How to speed up your Python code
Your largest time sink is np.cos, which performs several checks on the format of the input.
These are relevant and usually negligible for high-dimensional inputs, but for your scalar input, they become the bottleneck.
The solution to this is to use math.cos, which only accepts scalars as input and thus is faster (though less flexible).
Another time sink is indexing x multiple times.
You can speed this up by having one state variable which you update and only writing to x once per iteration.
With all of this, you can speed up things by a factor of roughly ten:
import numpy as np
from math import cos
nIter = 1000000
Step = .001
x = np.zeros(1+nIter)
state = x[0] = 1
for i in range(nIter):
    state += Step*state**2*cos(Step*i+state)
    x[i+1] = state
Now, your main problem is that your truly innermost loop happens completely in Python, i.e., you have a lot of wrapping operations that eat up time.
You can avoid this by using uFuncs (e.g., created with SymPy’s ufuncify) and using NumPy’s accumulate:
import numpy as np
from sympy.utilities.autowrap import ufuncify
from sympy.abc import t,y
from sympy import cos
nIter = 1000000
Step = 0.001

f = ufuncify([y,t], y + Step*y**2*cos(t+y))

times = np.arange(0, nIter*Step, Step)
times[0] = 1  # the first element doubles as the initial value for accumulate

x = f.accumulate(times)
This runs practically within an instant.
… and why that’s not what you should worry about
If your exact code (and only that) is what you care about, then you shouldn’t worry about runtime anyway, because it’s very short either way.
If, on the other hand, you use this example to gauge efficiency for problems with a considerable runtime, it will fail, because it considers only one initial condition and very simple dynamics.
Moreover, you are using the Euler method, which is either inefficient or not robust, depending on your step size.
The latter (Step) is absurdly low in your case, yielding much more data than you probably need: with a step size of 1, you can see what's going on just fine.
If you want a robust integration in such cases, it’s almost always best to use a modern adaptive integrator, that can adjust its step size itself, e.g., here is a solution to your problem using a native Python integrator:
from math import cos
import numpy as np
from scipy.integrate import solve_ivp
T = 1000
dt = 0.001
x = solve_ivp(
        lambda t, state: state**2*cos(t+state),
        t_span = (0, T),
        t_eval = np.arange(0, T, dt),
        y0 = [1],
        rtol = 1e-5
    ).y
This automatically adjusts the step size to something higher, depending on the error tolerance rtol.
It still returns the same amount of output data, but that’s via interpolation of the solution.
It runs in 0.3 s for me.
How to speed up things in a scalable manner
If you still need to speed up something like this, chances are that your derivative (f) is considerably more complex than in your example and thus it is the bottleneck.
Depending on your problem, you may be able to vectorise its calculation (using NumPy or similar).
If you can't vectorise, I wrote a module that specifically focuses on this by hard-coding your derivative under the hood.
Here is your example with a sampling step of 1:
import numpy as np
from jitcode import jitcode,y,t
from symengine import cos
T = 1000
dt = 1
ODE = jitcode([y(0)**2*cos(t+y(0))])
ODE.set_initial_value([1])
ODE.set_integrator("dop853")
x = np.hstack([ODE.integrate(t) for t in np.arange(0,T,dt)])
This runs again within an instant. While this may not be a relevant speed boost here, this is scalable to huge systems.
The difference is JIT compilation, which Matlab uses by default. Let's try your example with Numba (a Python JIT compiler):
Code
import numba as nb
import numpy as np
import time
nIter = 1000000
Step = .001
@nb.njit()
def integrate(nIter, Step):
    x = np.zeros(1+nIter)
    x[0] = 1
    for i in range(1, 1+nIter):
        x[i] = x[i-1] + Step*x[i-1]**2*np.cos(Step*(i-1)+x[i-1])
    return x

# Avoid measuring the compilation time;
# this would also be recommendable for Matlab, to have a fair comparison
res = integrate(nIter, Step)

start = time.time()
for i in range(100):
    res = integrate(nIter, Step)
end = time.time()
print((end - start)/100)
This results in 0.022s runtime per call.

Multiprocessing in Python - How to use all cores when running DBSCAN?

I wrote some code using DBSCAN, but when running the predict function, the program uses only one CPU (at 100%) while the other CPUs are idle. Can anyone modify the dbscan_predict function below so that it uses all 8 threads my computer has (a CPU with 4 cores and 8 threads)?
def dbscan_predict(dbscan_model, X_new, metric=np.linalg.norm):
    # Result is noise by default
    y_new = np.ones(shape=len(X_new), dtype=int)*(-1)

    # Iterate all input samples for a label
    for j, x_new in enumerate(X_new):
        # Find a core sample closer than EPS
        for i, x_core in enumerate(dbscan_model.components_):
            if metric(x_new - x_core) < dbscan_model.eps:
                # Assign label of x_core to x_new
                y_new[j] = dbscan_model.labels_[dbscan_model.core_sample_indices_[i]]
                break

    return y_new
Use the multiprocessing package of Python. You can define the function and split your data across calls submitted to a multiprocessing.Pool; the pool will then distribute the work over different cores.
You can check the number of cores using multiprocessing.cpu_count() as well, and decide on the split based on that, as sketched below.
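For what it's worth, here is a minimal sketch of that suggestion (the helper names dbscan_predict_chunk and dbscan_predict_parallel are mine, not from the question): split X_new into one chunk per process, classify each chunk with the same nearest-core-sample logic, and concatenate the results.

import multiprocessing as mp
import numpy as np

def dbscan_predict_chunk(args):
    # One worker: classify a chunk of new samples against the fitted model's
    # core samples (passed as plain arrays so they pickle cleanly)
    X_chunk, components, labels, core_indices, eps = args
    y_new = np.full(len(X_chunk), -1, dtype=int)  # noise by default
    for j, x_new in enumerate(X_chunk):
        for i, x_core in enumerate(components):
            if np.linalg.norm(x_new - x_core) < eps:
                y_new[j] = labels[core_indices[i]]
                break
    return y_new

def dbscan_predict_parallel(dbscan_model, X_new, n_procs=None):
    n_procs = n_procs or mp.cpu_count()
    chunks = np.array_split(X_new, n_procs)
    args = [(chunk, dbscan_model.components_, dbscan_model.labels_,
             dbscan_model.core_sample_indices_, dbscan_model.eps)
            for chunk in chunks]
    with mp.Pool(n_procs) as pool:
        results = pool.map(dbscan_predict_chunk, args)
    return np.concatenate(results)

On platforms that spawn rather than fork (e.g., Windows), the call has to live under an if __name__ == "__main__": guard, and dbscan_predict_chunk must be defined at module level so it can be pickled.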

Python - multiprocessing for matplotlib griddata

Following my former question [1], I would like to apply multiprocessing to matplotlib's griddata function. Is it possible to split the griddata call into, say, 4 parts, one for each of my 4 cores? I need this to improve performance.
For example, try the code below, experimenting with different values for size:
import numpy as np
import matplotlib.mlab as mlab
import time
size = 500
Y = np.arange(size)
X = np.arange(size)
x, y = np.meshgrid(X, Y)
u = x * np.sin(5) + y * np.cos(5)
v = x * np.cos(5) + y * np.sin(5)
test = x + y
tic = time.clock()
test_d = mlab.griddata(
    x.flatten(), y.flatten(), test.flatten(), x+u, y+v, interp='linear')
toc = time.clock()
print 'Time=', toc-tic
I ran the example code below in Python 3.4.2, with numpy version 1.9.1 and matplotlib version 1.4.2, on a Macbook Pro with 4 physical CPUs (i.e., as opposed to "virtual" CPUs, which the Mac hardware architecture also makes available for some use cases):
import numpy as np
import matplotlib.mlab as mlab
import time
import multiprocessing
# This value should be set much larger than nprocs, defined later below
size = 500
Y = np.arange(size)
X = np.arange(size)
x, y = np.meshgrid(X, Y)
u = x * np.sin(5) + y * np.cos(5)
v = x * np.cos(5) + y * np.sin(5)
test = x + y
tic = time.clock()
test_d = mlab.griddata(
    x.flatten(), y.flatten(), test.flatten(), x+u, y+v, interp='linear')
toc = time.clock()
print('Single Processor Time={0}'.format(toc-tic))
# Put interpolation points into a single array so that we can slice it easily
xi = x + u
yi = y + v
# My example test machine has 4 physical CPUs
nprocs = 4
jump = int(size/nprocs)
# Enclose the griddata function in a wrapper which will communicate its
# output result back to the calling process via a Queue
def wrapper(x, y, z, xi, yi, q):
    test_w = mlab.griddata(x, y, z, xi, yi, interp='linear')
    q.put(test_w)
# Measure the elapsed time for multiprocessing separately
ticm = time.clock()
queue, process = [], []
for n in range(nprocs):
    queue.append(multiprocessing.Queue())
    # Handle the possibility that size is not evenly divisible by nprocs
    if n == (nprocs-1):
        finalidx = size
    else:
        finalidx = (n + 1) * jump
    # Define the arguments, dividing the interpolation variables into
    # nprocs roughly evenly sized slices
    argtuple = (x.flatten(), y.flatten(), test.flatten(),
                xi[:,(n*jump):finalidx], yi[:,(n*jump):finalidx], queue[-1])
    # Create the processes, and launch them
    process.append(multiprocessing.Process(target=wrapper, args=argtuple))
    process[-1].start()
# Initialize an array to hold the return value, and make sure that it is
# null-valued but of the appropriate size
test_m = np.asarray([[] for s in range(size)])
# Read the individual results back from the queues and concatenate them
# into the return array
for q, p in zip(queue, process):
    test_m = np.concatenate((test_m, q.get()), axis=1)
    p.join()
tocm = time.clock()
print('Multiprocessing Time={0}'.format(tocm-ticm))
# Check that the result of both methods is actually the same; should raise
# an AssertionError exception if assertion is not True
assert np.all(test_d == test_m)
and I got the following result:
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/matplotlib/tri/triangulation.py:110: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
  self._neighbors)
Single Processor Time=8.495998
Multiprocessing Time=2.249938
I'm not really sure what is causing the "future warning" from triangulation.py (evidently my version of matplotlib did not like something about the input values that were originally provided for the question), but regardless, the multiprocessing does appear to achieve the desired speedup of 8.50/2.25 = 3.8 (edit: see comments), roughly the 4X we would expect for a machine with 4 CPUs. And the assertion statement at the end also executes successfully, proving that the two methods get the same answer, so in spite of the slightly weird warning message, I believe that the code above is a valid solution.
EDIT: A commenter has pointed out that both my solution, as well as the code snippet posted by the original author, are likely using the wrong method, time.clock(), for measuring execution time; he suggests using time.time() instead. I think I'm also coming around to his point of view. (Digging into the Python documentation a bit further, I'm still not convinced that even this solution is 100% correct, as newer versions of Python appear to have deprecated time.clock() in favor of time.perf_counter() and time.process_time(). But regardless, I do agree that whether or not time.time() is absolutely the most correct way of taking this measurement, it's still probably more correct than what I had been using before, time.clock().)
Assuming the commenter's point is correct, then it means the approximately 4X speedup that I thought I had measured is in fact wrong.
However, that does not mean that the underlying code itself wasn't correctly parallelized; rather, it just means that parallelization didn't actually help in this case; splitting up the data and running on multiple processors didn't improve anything. Why would this be? Other users have pointed out that, at least in numpy/scipy, some functions run on multiple cores, and some do not, and it can be a seriously challenging research project for an end-user to try to figure out which ones are which.
Based on the results of this experiment, if my solution correctly achieves parallelization within Python but no further speedup is observed, then the simplest likely explanation is that matplotlib probably also parallelizes some of its functions "under the hood", so to speak, in compiled C++ libraries, just like numpy/scipy already do. Assuming that's the case, the correct answer to this question would be that nothing further can be done: further parallelizing in Python will do no good if the underlying C++ libraries are already silently running on multiple cores to begin with.
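As a footnote on the timing question itself, here is a minimal sketch of the measurement pattern discussed above (do_work is a placeholder for whatever is being timed):

import time

def do_work():
    # Placeholder for the code being timed
    return sum(i*i for i in range(10**6))

t0 = time.perf_counter()   # wall-clock time, monotonic
c0 = time.process_time()   # CPU time of this process only
result = do_work()
print("wall: %.3f s, cpu: %.3f s"
      % (time.perf_counter() - t0, time.process_time() - c0))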
