I'm trying to obtain the function expected_W, or H, that is the result of an integration:

H(p, theta) = ∫∫ w(p, theta, eps, beta) * f(beta | theta) * q(eps) d(eps) d(beta)

where:
theta is a vector with two elements: theta_0 and theta_1
f(beta | theta) is a normal density for beta with mean theta_0 and variance theta_1
q(epsilon) is a normal density for epsilon with mean zero and variance sigma_epsilon (set to 1 by default).
w(p, theta, eps, beta) is a function I take as input, so I cannot predict exactly how it looks. It will likely be non-linear, but not particularly nasty.
This is the way I implement the problem. I'm sure the wrapper functions I make are a mess, so I'd be happy to receive any help on that too.
from __future__ import division
from scipy import integrate
from scipy.stats import norm
import math
import numpy as np
def exp_w(w_B, sigma_eps = 1, **kwargs):
'''
Integrates the w_B function
Input:
+ w_B : the function to be integrated.
+ sigma_eps : variance of the epsilon term. Set to 1 by default
'''
#The integrand function gives everything under the integral:
# w(B(p, \theta, \epsilon, \beta)) f(\beta | \theta ) q(\epsilon)
def integrand(eps, beta, p, theta_0, theta_1, sigma_eps=sigma_eps):
q_e = norm.pdf(eps, loc=0, scale=math.sqrt(sigma_eps))
f_beta = norm.pdf(beta, loc=theta_0, scale=math.sqrt(theta_1))
        return (w_B(p=p, theta_0=theta_0, theta_1=theta_1,
                    eps=eps, beta=beta)
                * q_e * f_beta)
#limits of integration. Using limited support for now.
eps_inf = lambda beta : -10 # otherwise: -np.inf
eps_sup = lambda beta : 10 # otherwise: np.inf
beta_inf = -10
beta_sup = 10
def integrated_f(p, theta_0, theta_1):
return integrate.dblquad(integrand, beta_inf, beta_sup,
eps_inf, eps_sup,
args = (p, theta_0, theta_1))
# this integrated_f is the H referenced at the top of the question
return integrated_f
I tested this function with a simple w function for which I know the analytic solution (this won't usually be the case).
def test_exp_w():
def w_B(p, theta_0, theta_1, eps, beta):
return 3*(p*eps + p*(theta_0 + theta_1) - beta)
# Function that I get
integrated = exp_w(w_B, sigma_eps = 1)
# Function that I should get
def exp_result(p, theta_0, theta_1):
return 3*p*(theta_0 + theta_1) - 3*theta_0
args = np.random.rand(3)
d_args = {'p' : args[0], 'theta_0' : args[1], 'theta_1' : args[2]}
if not (np.allclose(
integrated(**d_args)[0], exp_result(**d_args)) ):
raise Exception("Integration procedure isn't working!")
Hence, my implementation seems to be working, but it's very slow for my purpose. I need to repeat this process tens or hundreds of thousands of times (it is a step in a value function iteration; I can give more info if people think it's relevant).
With scipy version 0.14.0 and numpy version 1.8.1, this integral takes 15 seconds to compute.
Does anybody have any suggestion on how to go about this?
To start with, it would probably help to use bounded domains of integration, but I haven't figured out how to do that, or whether the Gaussian quadrature in SciPy takes care of it in a good way (does it use Gauss-Hermite?).
Thanks for your time.
---- Edit: adding profiling times -----
%lprun results show that most of the time is spent in
_distn_infrastructure.py:1529(pdf) and
_continuous_distns.py:97(_norm_pdf)
each with a whopping 83244 calls.
The time taken to integrate your function sounds very long if the function is not a nasty one.
First thing I suggest you do is to profile where the time is spent. Is it spent in dblquad or elsewhere? How many calls are made to w_B during the integration? If the time is spent in dblquad and the number of calls is very high, could you use looser tolerances in the integration?
It seems that the multiplication by the Gaussians actually lets you tighten the integration limits a great deal, as most of the mass of a Gaussian lies within a very small area. You might want to try to calculate reasonable, tighter bounds. You have already limited the domain to -10..10; is there any significant performance change between -100..100, -10..10, and -1..1?
If you know your functions are relatively smooth, then there is a Mickey-Mouse version of the integration:
determine reasonable upper and lower limits in both axes (by the gaussians)
calculate a reasonable grid density (e.g. 100 points in each direction)
calculate w_B for each of these points (this will be much faster if you can require a vectorized version of w_B, as in the sketch below)
sum it all together
This is very low-tech but also very fast. Whether or not it gives you results which are good enough for the outer iteration is an interesting question. It just might.
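A minimal sketch of that recipe (the name exp_w_grid and the grid parameters n and n_sigmas are illustrative choices, and it assumes w_B accepts NumPy arrays):
import numpy as np
from scipy.stats import norm
def exp_w_grid(w_B, sigma_eps=1, n=101, n_sigmas=6):
    '''Grid-based approximation of exp_w; assumes w_B is vectorized.'''
    sd_eps = np.sqrt(sigma_eps)
    def integrated_f(p, theta_0, theta_1):
        sd_beta = np.sqrt(theta_1)
        # Bound each axis by a few standard deviations of its gaussian
        eps = np.linspace(-n_sigmas * sd_eps, n_sigmas * sd_eps, n)
        beta = np.linspace(theta_0 - n_sigmas * sd_beta,
                           theta_0 + n_sigmas * sd_beta, n)
        E, B = np.meshgrid(eps, beta)
        # Evaluate both gaussian weights once, over the whole grid
        weight = norm.pdf(E, 0, sd_eps) * norm.pdf(B, theta_0, sd_beta)
        vals = w_B(p=p, theta_0=theta_0, theta_1=theta_1, eps=E, beta=B)
        # Plain Riemann sum over the rectangle
        return np.sum(vals * weight) * (eps[1] - eps[0]) * (beta[1] - beta[0])
    return integrated_f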
Related
I am new to Stack Overflow and also quite new to Python, so I hope I am asking my question in an appropriate manner.
I am running Python code similar to this minimal example, with an example function that is the product of a Lorentzian with a cosine, which I want to integrate numerically:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
#minimal example:
omega_loc = 15
gamma = 5
def Lorentzian(w):
#print(w)
return (w**3)/((w/omega_loc) + 1)**2*(gamma/2)/((w-omega_loc)**2+(gamma/2)**2)
def intRe(t):
return quad(lambda w: w**(-2)*Lorentzian(w)*(1-np.cos(w*t)),0,np.inf,limit=10000)[0]
plt.figure(1)
plot_range = np.linspace(0,100,1000)
plt.plot(plot_range, [intRe(t) for t in plot_range])
Independent of the upper limit of the integration, I never get the code to run to completion and give me a result.
When I enable the #print(w) line, it seems like the code just keeps probing the integral at different, seemingly random values of w in an infinite loop (?). The console also reports that a roundoff error was detected.
Is there a different way to do numerical integration in Python that is better suited to this kind of function than quad, or did I make a more fundamental error?
Observations
Close to zero, (1 - cos(w*t)) / w**2 tends to 0/0. We can take the Taylor expansion t**2 * (1/2 - (w*t)**2/24).
Going to infinity, the Lorentzian decays only slowly, and the cosine term makes the integrand oscillate indefinitely; the integral can be approximated by multiplying that term by a slowly decreasing damping factor.
You are using a linearly spaced scale with many points. It is easier to visualize with w on a log scale.
The plot looks like this before damping the cosine term
I introduced two parameters to tune the attenuation of the oscillations:
def cosinus_term(w, t, damping=1e4*omega_loc):
return np.where(abs(w*t) < 1e-6, t**2*(0.5 - (w*t)**2/24.0), (1-np.exp(-abs(w/damping))*np.cos(w*t))/w**2)
def intRe(t, damping=1e4*omega_loc):
    return quad(lambda w: cosinus_term(w, t, damping)*Lorentzian(w), 0, np.inf, limit=10000)[0]
Plotting with the following code
plt.figure(1)
plot_range = np.logspace(-3,3,100)
plt.plot(plot_range, [intRe(t, 1e2*omega_loc) for t in plot_range])
plt.plot(plot_range, [intRe(t, 1e3*omega_loc) for t in plot_range])
plt.xscale('log')
It runs in less than 3 minutes here, and the two results are close to each other, especially for large t, suggesting that the damping doesn't affect the result too much.
I was wondering if anybody knows of a reliable and fast analytic HDI calculation, preferably for beta distributions.
The definition of the HDI is in this question called "Highest Posterior Density Region".
I am looking for a function that has the following I/O:
Input
credMass - credible interval mass (e.g, 0.95 for 95% credible interval)
a - shape parameter (e.g, number of HEADS coin tosses)
b - shape parameter (e.g, number of TAILS coin tosses)
Output
ci_min - lower bound of the interval (value between 0 and ci_max)
ci_max - upper bound of the interval (value between ci_min and 1)
One way to go about it is to speed up this script that I found
in the same said question (adapted from R to Python, and taken from the book Doing Bayesian Data Analysis by John K. Kruschke). I have used this solution, which is quite reliable, but it's a bit too slow for multiple calls. A speedup of 10X or even 100X would be very helpful!
from scipy.optimize import fmin
from scipy.stats import beta
def HDIofICDF(dist_name, credMass=0.95, **args):
# freeze distribution with given arguments
distri = dist_name(**args)
# initial guess for HDIlowTailPr
incredMass = 1.0 - credMass
def intervalWidth(lowTailPr):
return distri.ppf(credMass + lowTailPr) - distri.ppf(lowTailPr)
# find lowTailPr that minimizes intervalWidth
HDIlowTailPr = fmin(intervalWidth, incredMass, ftol=1e-8, disp=False)[0]
# return interval as array([low, high])
return distri.ppf([HDIlowTailPr, credMass + HDIlowTailPr])
Usage:
print(HDIofICDF(beta, credMass=0.95, a=5, b=4))
Word of caution! Some solutions confuse this HDI with the Equal Tail Interval solution (which the said question calls "Central Credible Region") which is far easier to compute, but does not answer the same question. (E.g, see Kruschke's Why HDI and not equal-tailed interval?)
Also this question does not concern the MCMC approaches that I have seen in PyMC3 (pymc3.stats.hpd(a), where a is a random variable sample), but rather regards an analytical solution.
Thank you!
Not analytical, but posting in case you still find it useful. Instead of evaluating the ppf (which can be very slow for many distributions), you can find an approximate solution by evaluating the pdf, something like this:
import numpy as np

def hdi_grid(dist, x_vals, mass):
    pdf = dist.pdf(x_vals)
    pdf = pdf / pdf.sum()
    # Visit the grid points from highest to lowest density
    idx = np.argsort(pdf)[::-1]
    mass_cum = 0
    indices = []
    for i in idx:
        mass_cum += pdf[i]
        indices.append(i)
        if mass_cum >= mass:
            break
    # The interval runs from the lowest to the highest retained point
    return x_vals[np.sort(indices)[[0, -1]]]
Where mass is a number between 0 and 1, and x_vals is a vector of equally spaced values over the support of the distribution; for the beta, x_vals = np.linspace(0, 1, size) should do the job. You can control the trade-off between accuracy and speed by setting a suitable value of size for your problem. How much faster this solution is compared with the other one (using the ppf and optimization) will depend on the details of SciPy's ppf and pdf methods, which differ between distributions, and could even depend on the parameters of the distribution.
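For example, with the sketch above wrapped as hdi_grid(dist, x_vals, mass), a hypothetical beta case could look like this:
import numpy as np
from scipy.stats import beta
x_vals = np.linspace(0, 1, 10000)
print(hdi_grid(beta(5, 4), x_vals, 0.95))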
I'm trying to solve a system of coupled, first-order ODEs in Python. I'm new to this, but the Zombie Apocalypse example from SciPy.org has been a great help so far.
An important difference in my case is that the input data used to "drive" my system of ODEs changes abruptly at various time points and I'm not sure how best to deal with this. The code below is the simplest example I can think of to illustrate my problem. I appreciate this example has a straightforward analytical solution, but my actual system of ODEs is more complicated, which is why I'm trying to understand the basics of numerical methods.
Simplified example
Consider a bucket with a hole in the bottom (this kind of "linear reservoir" is the basic building block of many hydrological models). The input flow rate to the bucket is R and the output from the hole is Q. Q is assumed to be proportional to the volume of water in the bucket, V. The constant of proportionality is usually written as 1/T, where T is the "residence time" of the store. This gives a simple ODE of the form

dQ/dt = (R - Q)/T
In reality, R is an observed time series of daily rainfall totals. Within each day, the rainfall rate is assumed to be constant, but between days the rate changes abruptly (i.e. R is a discontinuous function of time). I'm trying to understand the implications of this for solving my ODEs.
Strategy 1
The most obvious strategy (to me at least) is to apply SciPy's odeint function separately within each rainfall time step. This means I can treat R as a constant. Something like this:
import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sn
from scipy.integrate import odeint
np.random.seed(seed=17)
def f(y, t, R_t):
""" Function to integrate.
"""
# Unpack parameters
Q_t = y[0]
# ODE to solve
dQ_dt = (R_t - Q_t)/T
return dQ_dt
# #############################################################################
# User input
T = 10 # Time constant (days)
Q0 = 0. # Initial condition for outflow rate (mm/day)
days = 300 # Number of days to simulate
# #############################################################################
# Create a fake daily time series for R
# Generate random values from a uniform dist
df = pd.DataFrame({'R':np.random.uniform(low=0, high=5, size=days+20)},
index=range(days+20))
# Smooth with a moving window to make more sensible
df['R'] = df['R'].rolling(window=20).mean()
# Chop off the NoData at the start due to moving window
df = df[20:].reset_index(drop=True)
# List to store results
Q_vals = []
# Vector of initial conditions
y0 = [Q0, ]
# Loop over each day in the R dataset
for step in range(days):
# We want to find the value of Q at the end of this time step
t = [0, 1]
# Get R for this step
    R_t = float(df['R'].iloc[step])
# Solve the ODEs
soln = odeint(f, y0, t, args=(R_t,))
# Extract flow at end of step from soln
    Q = float(soln[1, 0])
# Append result
Q_vals.append(Q)
# Update initial condition for next step
y0 = [Q, ]
# Add results to df
df['Q'] = Q_vals
Strategy 2
The second approach involves simply feeding everything to odeint and letting it deal with the discontinuities. Using the same parameters and R values as above:
def f(y, t):
""" Function used integrate.
"""
    # Unpack parameters
Q_t = y[0]
    # Get the value of R at this t (R is constant within each day)
    idx = min(int(t), len(df) - 1)
    R_t = float(df['R'].iloc[idx])
# ODE to solve
dQ_dt = (R_t - Q_t)/T
return dQ_dt
# Vector of initial parameter values
y0 = [Q0, ]
# Time grid
t = np.arange(0, days, 1)
# solve the ODEs
soln = odeint(f, y0, t)
# Add result to df
df['Q'] = soln[:, 0]
Both of these approaches give identical answers, which look like this:
However, the second strategy, although more compact in terms of code, is much slower than the first. I guess this has something to do with the discontinuities in R causing problems for odeint?
My questions
Is strategy 1 the best approach here, or is there a better way?
Is strategy 2 a bad idea and why is it so slow?
Thank you!
1.) Yes
2.) Yes
Reason for both: Runge-Kutta solvers expect ODE functions that have an order of differentiability at least as high as the order of the solver. This is needed so that the Taylor expansion which gives the expected error term exists. It means that even the order-1 Euler method expects a differentiable ODE function. Thus no jumps are allowed; kinks can be tolerated at order 1, but not in higher-order solvers.
This is especially true for implementations with automatic step-size adaptation. Whenever a point is approached where the required differentiability does not hold, the solver sees a stiff system and drives the step size toward 0, which slows the solver down.
You can combine strategies 1 and 2 if you use a solver with a fixed step size that is a fraction of 1 day. Then the sampling points at the day boundaries serve as (implicit) restart points with the new constant.
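A minimal sketch of that combination, assuming the daily R values sit in a NumPy array R_daily (the helper name rk4_daily and the choice of 10 steps per day are illustrative): classic fixed-step RK4, where every day boundary lands exactly on a step.
import numpy as np
def rk4_daily(R_daily, T, Q0, steps_per_day=10):
    dt = 1.0 / steps_per_day
    Q = Q0
    out = []
    for R in R_daily:                      # R is constant within each day
        f = lambda q: (R - q) / T          # dQ/dt = (R - Q)/T
        for _ in range(steps_per_day):     # classic RK4 steps
            k1 = f(Q)
            k2 = f(Q + 0.5 * dt * k1)
            k3 = f(Q + 0.5 * dt * k2)
            k4 = f(Q + dt * k3)
            Q += dt * (k1 + 2*k2 + 2*k3 + k4) / 6
        out.append(Q)                      # Q at the end of each day
    return np.array(out)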
I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically, so I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. By partial fractions, we can always write this kind of product as a sum of two simpler quotients with constant numerators:
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y to give -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial conditions. (Note that if y starts at exactly 0 or 1, dy/dt = 0 and the value of y never changes, so no finite t will reach 0.8.) Plug in the values for y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1); then use y = 0.8 to get a value for t. Because the logarithm grows slowly and t carries the small factor 0.01, y reaches 0.8 only after a sizeable range of t values (for y(0) = 0.5, t = 100 * ln(4) ≈ 138.6), and later still if the initial value of y is very small. It is of course also straightforward to rearrange the equation above to express y in terms of t, and then you can plot the function as well.
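As a quick sanity check of the algebra (assuming, for illustration, y(0) = 0.5):
import math
y0, y_target = 0.5, 0.8
C = math.log(y0 / (1 - y0))                          # C = 0 for y0 = 0.5
t = 100 * (math.log(y_target / (1 - y_target)) - C)
print(t)                                             # ~138.63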
Edit: Numerical integration
For a more complexed ODE which cannot be solved analytically, you will have to try numerically. Initially we only know the value of the function at zero time y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
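A bare-bones Euler loop for this particular ODE might look like the following sketch (the step size dt = 0.01 and the start value y = 0.5 are illustrative choices):
dt, t, y = 0.01, 0.0, 0.5
while y < 0.8 and t < 3000:
    y += 0.01 * y * (1 - y) * dt   # y(t + dt) = y(t) + dy/dt * dt
    t += dt
print(t, y)                        # first t at which y passes 0.8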
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out, for example, the Runge-Kutta methods, a family of integrators that remove successive orders of the error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about solving problems like numerical stability or have an in depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straightaway. For instance
from scipy.integrate import ode
def deriv(t, y):
return 0.01 * y * (1 - y)
my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)
t = 0.1 # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)[0]
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint
ts = np.arange(0,3000,1) # time series - start, stop, step
def rhs(y,t):
return 0.01*y*(1-y)
y0 = np.array([0.5]) # initial value (y = 1 is a fixed point of the ODE, so start below the target)
ys = odeint(rhs,y0,ts)
Then analyse the numpy array ys to find your answer (dimensions of array ts matches ys). (This may not work first time because I am constructing from memory).
This might involve using scipy's interpolation functions on the ys array, so that you can read off the time at which y crosses your target value.
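For instance, a sketch of that post-processing step (it assumes the starting value is below 0.8, so the trajectory actually crosses the target):
ys_flat = ys.ravel()
i = int(np.argmax(ys_flat >= 0.8))     # first sample at or above the target
if i > 0:
    # Linear interpolation between the bracketing samples
    t_cross = np.interp(0.8, ys_flat[i-1:i+1], ts[i-1:i+1])
    print(t_cross)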
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; the odeint documentation on the scipy website has examples for systems such as coupled spring-masses, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. They exist, and the low-level code for such integrators is supplied with scipy, but it has not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files
I have a piece of code which increments a time-step for the so-called Lorenz95 model (invented by Ed Lorenz in 1995). It's normally implemented as a 40-variable model, and displays chaotic behaviour. I have coded up the time-stepping for the algorithm as follows:
import random

class Lorenz:
'''Lorenz-95 equation'''
global F, dt, SIZE
F = 8
dt = 0.01
SIZE = 40
def __init__(self):
self.x = [random.random() for i in range(SIZE)]
def euler(self):
'''Euler time stepping'''
newvals = [0]*SIZE
for i in range(SIZE-1):
newvals[i] = self.x[i] + dt * (self.x[i-1] * (self.x[i+1] - self.x[i-2]) - self.x[i] + F)
newvals[SIZE-1] = self.x[SIZE-1] + dt * (self.x[SIZE-2] * (self.x[0] - self.x[SIZE-3]) - self.x[SIZE-1] + F)
self.x = newvals
This function euler is not slow, but unfortunately, my code needs to make a very large number of calls to it. Is there a way I could code the time-stepping to make it run faster?
Many thanks.
There are at least two kinds of possible optimizations: working in a smarter way (algorithmic improvements) and working faster.
On the algorithmic side, you're using the Euler method, which is a first-order method (so the global error is proportional to the step size) and has a smallish stability region. That is, it's not very efficient.
On the implementation side, if you're using the standard CPython interpreter, this kind of code is going to be quite slow. To get around that, you could simply try running it under PyPy. Its just-in-time compiler can make numerical code run maybe 100x faster. You could also write a custom C or Cython extension.
But there's a better way. Solving systems of ordinary differential equations is quite common, so scipy, one of the core scientific libraries in Python, wraps fast, battle-tested Fortran libraries to solve them. By using scipy you get both the algorithmic improvements (as the integrators are of higher order) and a fast implementation.
Solving the Lorenz 95 model for a set of perturbated initial conditions looks like this:
import numpy as np
def lorenz95(x, t):
return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F
if __name__ == '__main__':
import matplotlib.pyplot as plt
from scipy.integrate import odeint
SIZE = 40
F = 8
t = np.linspace(0, 10, 1001)
x0 = np.random.random(SIZE)
for perturbation in 0.1 * np.random.randn(5):
x0i = x0.copy()
x0i[0] += perturbation
x = odeint(lorenz95, x0i, t)
plt.plot(t, x[:, 0])
plt.show()
And the output (setting np.random.seed(7); yours can be different) is nicely chaotic. Small perturbations in the initial conditions (in just one of the coordinates!) produce very different solutions:
But, is it really faster than Euler time stepping? For dt = 0.01 it seems almost three times faster, but the solutions don't match except at the very beginning.
If dt is reduced, the solution provided by the Euler method gets increasingly similar to the odeint solution, but it takes much longer. Notice how, the smaller the dt, the later the Euler solutions lose track of the odeint solution. The most precise Euler solution took 600x longer to compute the solution up to t=6 than odeint did up to t=10. See the full script here.
In the end, this system is so unstable that I guess not even the odeint solution is accurate along all plotted time.