I want to numerically solve integrals that contain white noise.
Mathematically, white noise can be described by a random variable X(t) with zero time average, Avg[X(t)] = 0, and a delta-correlated correlation function, Avg[X(t) X(t')] = delta(t - t').
A simple example would be to calculate the integral over X(t) from t=0 to t=1. On average this is of course zero, but what I need are different realizations of this integral.
The problem is that this does not work with scipy.integrate.quad().
Are there any packages for python that deal with stochastic integrals?
This is a good starting point for numerical SDE methods: http://math.gmu.edu/~tsauer/pre/sde.pdf.
Here is a simple NumPy solver for the stochastic differential equation dX_t = a(t, X_t) dt + b(t, X_t) dW_t which I wrote for a class project last year. It implements the Euler-Maruyama method, the SDE analogue of the forward Euler method for ordinary differential equations, and in practice it is fairly widely used for solving SDEs.
import numpy as np

def euler_maruyama(a, b, x0, t):
    N = len(t)
    x = np.zeros((N, len(x0)))
    x[0] = x0
    for i in range(N - 1):
        dt = t[i+1] - t[i]
        # Brownian increment with mean 0 and *variance* dt; np.random.normal
        # takes the standard deviation, hence sqrt(dt), one draw per component.
        dWt = np.random.normal(0, np.sqrt(dt), size=len(x0))
        x[i+1] = x[i] + a(t[i], x[i])*dt + b(t[i], x[i])*dWt
    return x
Essentially, at each timestep the deterministic part of the equation is advanced using forward Euler, while the stochastic part is advanced by generating a normal random variable dWt with mean 0 and variance dt and multiplying the diffusion coefficient by it.
The reason we generate dWt like this is based on the definition of Brownian motion. In particular, if $W$ is a Brownian motion, then $W_t - W_s$ is normally distributed with mean 0 and variance $t-s$. So dWt is a discretization of the change in $W$ over a small time interval.
This is the docstring for the function above:
Parameters
----------
a : callable a(t, X_t)
    t is scalar time and X_t is vector position
b : callable b(t, X_t)
    where t is scalar time and X_t is vector position
x0 : ndarray
    the initial position
t : ndarray
    list of times at which to evaluate trajectory

Returns
-------
x : ndarray
    positions of trajectory at each time in t
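For the original question, a minimal usage sketch (my assumption: with zero drift a = 0 and unit diffusion b = 1, the SDE reduces to dX_t = dW_t, so the endpoint X(1) is one realization of the integral of white noise over [0, 1]):

import numpy as np

# One realization of the integral of white noise from t=0 to t=1: with
# a = 0 and b = 1 the scheme just accumulates Brownian increments, so the
# endpoint is distributed as N(0, 1) across realizations.
t = np.linspace(0.0, 1.0, 1001)
path = euler_maruyama(lambda t, x: 0.0, lambda t, x: 1.0, np.array([0.0]), t)
realization = path[-1, 0]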
I'm trying to implement a genetic algorithm in order to find the maximum of this function:
f(x,y,z) = x^2 + y^3 + z^4 + xyz on [0,10] x [0,20] x [0,30] for x, y, z respectively
My objective function is the same as the function above. Each (x, y, z) in the population is represented by a binary string which is initialized randomly.
I've implemented it like this:
Roulette wheel selection
Single point crossover (no crossover rate)
Mutation: if the chromosome (x, y, z) is selected for mutation, I flip a random bit of x, a random bit of y, and a random bit of z (see the sketch below).
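A hypothetical sketch of that mutation operator, assuming each chromosome is stored as a list of three bit strings (the names and representation are illustrative, not from the original code):

import random

def mutate(chromosome):
    # chromosome is a list of three bit strings, e.g. ['0101', '1100', '0011'];
    # flip exactly one randomly chosen bit in each gene
    mutated = []
    for gene in chromosome:
        i = random.randrange(len(gene))
        flipped = '1' if gene[i] == '0' else '0'
        mutated.append(gene[:i] + flipped + gene[i+1:])
    return mutated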
I've noticed that, since z has the biggest exponent, it is the first variable to converge; the others have trouble converging as the generations go by. Is this acceptable behavior?
I want to find the integral of output power Po in the following code:
import numpy as np
import matplotlib.pyplot as plt

Vo = 54.6

# defining a function for duty cycle, output current and output power
def duty_cycle(output_voltage, array_voltage):
    duty_cycle = np.divide(output_voltage, array_voltage)
    return duty_cycle

def output_current(array_current, duty_cycle):
    output_current = np.divide(array_current, duty_cycle)
    return output_current

def output_power(output_voltage, output_current):
    output_power = np.multiply(output_voltage, output_current)
    return output_power

# calculating duty cycle, output current and output power
# (array_params is a pandas time-series DataFrame defined earlier in the script)
D = duty_cycle(Vo, array_params['arr_v_mp'])
Io = output_current(array_params['arr_i_mp'], D)
Po = output_power(Vo, Io)

# plot output power
plt.ylabel('Output Power [W]')
Po.plot(style='r-')
The code above is just a part of a script; array_params is a pandas time-series DataFrame. When the pandas Series Po is plotted, it looks like this:
This is my first time calculating an integral using Python. After reading through the internet, I think Python's scipy module could be of help, but I don't really know which method to use or how to implement it. I would appreciate your help in any manner with the above-explained problem.
To compute an integral of the form int y(x) dx from x0 to x1, given an array x_array with values from x0 to x1 and a corresponding y_array of the same length, one can use numpy's trapezoidal integration:
integral = np.trapz(y_array, x_array)
which also works for non-constant spacing x_array[i+1] - x_array[i].
If an indefinite integral (i.e. an antiderivative F(t) = integral f(t) dt) is needed, use scipy.integrate.cumtrapz instead (numpy.trapz is for definite integrals):
integrated = scipy.integrate.cumtrapz(power, dx=timestep)
or
integrated = scipy.integrate.cumtrapz(power, x=timevalues)
To make integrated the same length as power, specify the initial value of the integral via the optional parameter initial (e.g. initial=0) of scipy.integrate.cumtrapz.
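Putting this together for the Po series above, a minimal sketch (assuming power and timevalues are 1-D arrays of the same length, e.g. Po.values and a numeric time axis in seconds):

import numpy as np
from scipy.integrate import cumtrapz

# definite integral: total energy in watt-seconds if timevalues is in seconds
energy_total = np.trapz(power, timevalues)

# running integral, padded with initial=0 so it has the same length as power
energy_curve = cumtrapz(power, x=timevalues, initial=0)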
I was trying to use FiPy to solve a set of PDEs when I realized the command sweep was not working the way I thought it would. Here is a sample with part of my code:
from pylab import *
import sys
from fipy import *

viscosity = 5.55555555556e-06
Pe = 5.
pfi = 100.
lfi = 0.01
Ly = 1.
Nx = 200
Ny = 100
Lx = Ly*Nx/Ny
dL = Ly/Ny

mesh = PeriodicGrid2DTopBottom(nx=Nx, ny=Ny, dx=dL, dy=dL)
x, y = mesh.cellCenters

xVelocity = CellVariable(mesh=mesh, hasOld=True, name='X velocity')
xVelocity.constrain(Pe, mesh.facesLeft)
xVelocity.constrain(Pe, mesh.facesRight)

rad = 0.1
var1 = DistanceVariable(name='distance to center', mesh=mesh,
                        value=numerix.sqrt((x - Nx*dL/2.)**2 + (y - Ny*dL/2.)**2))
pi_fi = CellVariable(mesh=mesh, value=0., name='Fluid-interface energy map')
pi_fi.setValue(pfi*exp(-1.*(var1 - rad)/lfi), where=(var1 > rad))
pi_fi.setValue(pfi, where=(var1 <= rad))

xVelocityEq = DiffusionTerm(coeff=viscosity) - ImplicitSourceTerm(pi_fi)

xres = 10.
while xres > 1.e-6:
    xVelocity.updateOld()
    mySolver = LinearGMRESSolver(iterations=1000, tolerance=1.e-6)
    xres = xVelocityEq.sweep(var=xVelocity, solver=mySolver)
    print('Result = ', xres)
# That's it
In short, I am declaring an equation called xVelocityEq and solving it using sweep. Here is my output:
Result = 0.0007856742013190237
Result = 6.414470433257661e-07
As you can see, the while loop ends after two iterations. My first question is: why is my first residual error (=0.0007856742013190237) higher than the solver's tolerance? I thought that, since xVelocityEq corresponds to a linear system, solver tolerance and residual error would mean the same thing.
If I increase the no. of iterations in mySolver from 1000 to 10000, I get the following output:
Result = 0.0007856742013190237
Result = 2.4619110931978988e-09
Why did the second residual change, given that the first remained the same?
If I increase the tolerance in mySolver from 1.e-6 to 7.e-4, I get the following output:
Result = 0.0007856742013190237
Result = 6.414470433257661e-07
Note that these residuals are the same as in the first output. Now if I try to further increase the tolerance to 8.e-4, here's what I get as output:
Result = 0.0007856742013190237
Result = 0.0007856742013190237
Result = 0.0007856742013190237
Result = 0.0007856742013190237
Result = 0.0007856742013190237
...
At this point I was completely lost. Why do the residuals have the same values for all solver tolerances smaller than 7.e-4? And why are these residuals constant and equal to 0.0007856742013190237 for solver tolerances higher than 7.e-4?
If I change the mySolver to LinearLUSolver (iterations=1000, tolerance=1.e-6), here's what I get:
Result = 0.0007856742013190237
Result = 1.6772757200988522e-18
Why in the world is my first residual the same as before, even though I have changed the solver?
why is my first residual error (=0.0007856742013190237) higher than the solver's tolerance?
The residual reported by .sweep() is calculated before the solver is invoked to calculate a new solution vector. The matrix L and right-hand-side vector b are calculated based on the initial value of the solution vector x.
The residual is a measure of how well the current solution vector satisfies the non-linear PDE. The solver tolerance places a limit on how hard the solver should work to satisfy the linear system of equations discretized from the PDE.
Even if the PDE is linear (e.g., the diffusion coefficient is not a function of the solution variable), the initial value presumably doesn't solve the PDE, so the residual is large. After the solver is invoked, then x should solve the PDE, to within the solver tolerance. If the PDE is non-linear, then a well-converged solution to the linear algebra is still probably not a good solution to the PDE; that's what sweeping is for.
I thought that, since xVelocityEq corresponds to a linear system, solver tolerance and residual error would mean the same thing.
If they meant the same thing, there wouldn't be any utility in keeping track of both. In addition to the residual being computed before the solve and the solver tolerance being used to terminate the solve, there are different normalizations that can be used, and a lot of the solver documentation can be kind of sketchy. FiPy uses |L x - b|_2 as its residual. Solvers may normalize by the magnitude of b, the diagonal of L, or the phase of the moon, all of which can make it hard to directly compare the residual with the tolerance.
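To make the distinction concrete, a small illustration in plain numpy (the matrix and vector here are made up, not taken from FiPy):

import numpy as np

# FiPy's sweep residual is the raw 2-norm |L x - b|_2 evaluated at the
# current x, before solving; a solver's own convergence test may instead
# use a normalized quantity, e.g. |L x - b|_2 / |b|_2.
L_mat = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)                                   # initial guess
raw_residual = np.linalg.norm(L_mat @ x - b)      # what sweep() reports
normalized = raw_residual / np.linalg.norm(b)     # what a solver might test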
Why did the second residual change, given that the first remained the same?
By allowing 10000 iterations instead of 1000, the solver was able to drive to a more exacting tolerance, which in turn led to a smaller residual for the next sweep.
Why do the residuals have the same values for all solver tolerances smaller than 7.e-4? And why are these residuals constant and equal to 0.0007856742013190237 for solver tolerances higher than 7.e-4?
Probably because the solver is failing and so not changing the value of the solution vector. Some solvers don't report this. In other cases, we should be doing a better job of reporting that fact to you.
Why in the world is my first residual the same as before, even though I have changed the solver?
The residual is not a property of the solver. It is a property of the discretized system of equations that approximates your PDE. Those linear algebra equations are then the input to the solver.
I am trying to find higher order derivatives of a dataset (x,y). x and y are 1D arrays of length N.
Let's say I generate them as:
import numpy as np

xder0 = np.linspace(0, 10, 1000)
yder0 = np.sin(xder0)
I define a derivative function which takes two arrays (x, y) and returns (x1, y1), where y1 is the finite-difference derivative at each index, (y[i+1]-y[i])/(x[i+1]-x[i]), and x1 is the midpoint of x[i+1] and x[i].
Here is the function that does it:
def deriv(x, y):
    delx  = np.zeros(len(x) - 1, dtype=np.longdouble)  # midpoints of x
    ydiff = np.zeros(len(x) - 1, dtype=np.longdouble)  # finite differences
    for i in range(len(x) - 1):
        delx[i]  = (x[i+1] + x[i]) / 2.0
        ydiff[i] = (y[i+1] - y[i]) / (x[i+1] - x[i])
    return delx, ydiff
Now to calculate the first derivative, I call this function as:
xder1, yder1 = deriv(xder0, yder0)
Similarly, for the second derivative, I call this function with the first derivatives as input:
xder2, yder2 = deriv(xder1, yder1)
And it goes on:
xder3, yder3 = deriv(xder2, yder2)
xder4, yder4 = deriv(xder3, yder3)
xder5, yder5 = deriv(xder4, yder4)
xder6, yder6 = deriv(xder5, yder5)
xder7, yder7 = deriv(xder6, yder6)
xder8, yder8 = deriv(xder7, yder7)
xder9, yder9 = deriv(xder8, yder8)
Something peculiar happens after I reach order 7: the 7th derivative becomes very noisy! The earlier derivatives are all sine or cosine functions, as expected, but the 7th order is a noisy sine, and hence all derivatives after it blow up.
Any idea what is going on?
This is a well-known stability issue with numerical interpolation using equally-spaced points. Read the answers at http://math.stackexchange.com.
To overcome this problem you have to use non-equally-spaced points, like the roots of a Legendre polynomial. The instability occurs due to the unavailability of information at the boundaries, so a higher concentration of points near the boundaries is required, e.g. at the roots of the Legendre polynomials or of others with similar properties, such as the Chebyshev polynomials.
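A minimal sketch of that idea using numpy's Chebyshev tools (the degree 20 here is an arbitrary choice for this data, not a general recommendation):

import numpy as np

# Fit the data with a Chebyshev series (a well-conditioned, non-equispaced
# basis under the hood) and differentiate the fit, instead of chaining
# finite differences on equally-spaced points.
xder0 = np.linspace(0, 10, 1000)
yder0 = np.sin(xder0)
cheb = np.polynomial.chebyshev.Chebyshev.fit(xder0, yder0, deg=20)
yder7 = cheb.deriv(7)(xder0)   # 7th derivative, evaluated on the grid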
I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically, so I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. By partial fractions, we can always write such a quotient as a sum of simpler quotients times some constants:
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y, giving -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial conditions. (Note that if y starts at 0 or 1, dy/dt = 0 and the value of y never changes, so the logarithms below would be undefined.) Plug in the values for y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1); then use y = 0.8 to get a value for t. Note that, because of the logarithm, y will reach 0.8 well within the given range of t values unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t, so you can plot the function as well.
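As a concrete check, a minimal sketch (my assumption: take y(0) = 0.5, so that C = ln(1) = 0; this initial value is not from the question):

import numpy as np

# 0.01*t = ln(y/(1-y)) with C = 0, so t = 100*ln(0.8/0.2)
t_target = 100 * np.log(0.8 / 0.2)
print(t_target)   # about 138.6, comfortably inside 0 < t < 3000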
Edit: Numerical integration
For a more complex ODE which cannot be solved analytically, you will have to integrate numerically. Initially we only know the value of the function at zero time, y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
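A minimal Euler sketch of this target-crossing idea (my assumptions: y(0) = 0.5 and step size dt = 0.01, neither of which comes from the question):

def f(y):
    # right-hand side of dy/dt = 0.01*y*(1-y)
    return 0.01 * y * (1 - y)

y, t, dt = 0.5, 0.0, 0.01
while y < 0.8 and t < 3000:
    y_prev, t_prev = y, t
    y = y + f(y) * dt      # Euler step: gradient assumed constant over dt
    t = t + dt

# interpolate linearly between the last two positions for a more precise t
t_hit = t_prev + dt * (0.8 - y_prev) / (y - y_prev)
print("y reaches 0.8 near t =", t_hit)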
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error arises at each integration step, which over time builds up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out, for example, the Runge-Kutta methods, a family of integrators which remove successive orders of the error term at the cost of increased computation time. If your function is differentiable, knowledge of the second or even third derivatives can also be used to reduce the integration error.
Fortunately, of course, somebody else has done the hard work here, and you don't have to worry too much about problems like numerical stability or have an in-depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an integrator class which you should be able to use straight away. For instance:
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)

t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)[0]  # integrate() returns an array
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1)  # time series - start, stop, step

def rhs(y, t):
    return 0.01*y*(1-y)

y0 = np.array([0.5])  # initial value; note y0 = 1 is a fixed point of the ODE
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the dimensions of ts match those of ys). (This may not work first time, because I am constructing it from memory.)
This might involve using scipy's interpolation functions on the ys array, so that you get a result at a given time t.
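For instance, a minimal sketch (assuming the trajectory ys from above increases monotonically toward 1, as it does for logistic growth started below 1):

import numpy as np

# invert the monotonic trajectory: interpolate t as a function of y
y_flat = ys.ravel()
t_at_target = np.interp(0.8, y_flat, ts)  # requires y_flat to be increasing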
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; odeint on the SciPy website has examples for systems such as coupled springs, and these could be extended (see the sketch below).
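A hypothetical sketch of that extension, rewriting the second-order 3D system from the question as six first-order equations for odeint (initial conditions are taken from the question; the variable names are mine):

import numpy as np
from scipy.integrate import odeint

def rhs3d(state, t):
    # state = [x, y, z, vx, vy, vz]; F = sqrt(x^2 + y^2 + z^2)
    x, y, z, vx, vy, vz = state
    F = np.sqrt(x*x + y*y + z*z)
    k = (1.0/F - 1.0) * 250.0
    return [vx, vy, vz, k*x, k*y, k*z - 10.0]

state0 = [0.0, 0.0, 0.7, 0.1, 1.5, 0.0]  # x, y, z, x', y', z' at t = 0
ts = np.linspace(0.0, 5.0, 5001)
traj = odeint(rhs3d, state0, ts)
F = np.sqrt((traj[:, :3]**2).sum(axis=1))  # then find where F crosses 0.85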
What you are asking for is an ODE integrator with root-finding capabilities. They exist, and the low-level code for such integrators is supplied with scipy, but it has not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files