Integration in Python - python

Hi, I have been given a question by my lecturer to integrate a function in Python, and he gave us very little information. The boundaries are +infinity and -infinity and the function is
cos(a*x) * exp(-x**2)
So far I have:
def gauss_cosine(a, n):
    sum = 0.0
    dx = ((math.cosine(a*x)*math.exp(-x**2)))
    return
    for k in range(0, n):
        x = a + k*dx
        sum = sum + f(x)
    return dx*sum
Not sure if this is right at all.
Kind regards

I don't see it recommended much on this site, but you could try sympy:
In [1]: import sympy as sp
In [2]: x, a = sp.symbols(('x', 'a'))
In [3]: f = sp.cos(a*x) * sp.exp(-x**2)
In [4]: res = sp.integrate(f, (x, -sp.oo, sp.oo))
In [5]: res
Out[5]: sqrt(pi)*exp(-a**2/4)
In [6]: sp.pprint(res)
Out[6]:
         2
       -a
       ────
  ___   4
╲╱ π ⋅ℯ
For numerical integration, try the scipy package.
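For example, a minimal sketch with scipy.integrate.quad, which accepts infinite limits directly (a = 2 is just a sample value, not from the question):
from scipy.integrate import quad
import numpy as np

a = 2.0
val, err = quad(lambda x: np.cos(a * x) * np.exp(-x ** 2), -np.inf, np.inf)
print(val)                                   # numerical result
print(np.sqrt(np.pi) * np.exp(-a ** 2 / 4))  # analytical result, for comparison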

Well, your integral has an analytical solution, and you can calculate it with sympy, as @Bill pointed out, +1.
However, what I think was the point of the question is how to numerically calculate this integral, and this is what I discuss here.
The integrand is even. We reduce the domain to [0, +inf] and will multiply the result by 2.
We still have an oscillatory integral on an unbounded domain. This is often a nasty beast, but we know that it is convergent and well behaved at +/- inf. In other words, exp(-x**2) decays to zero fast enough.
The trick is to change the variable of integration, x=tan(t), so that dx=(1+x**2)dt. The domain becomes [0, pi/2]; it is bounded, and the numerical integration is then a piece of cake.
Example with Simpson's rule from scipy, with a=2. With just 100 discretization points we get 5 digits of precision!
from scipy.integrate import simps
from numpy import pi, sqrt, linspace, tan, cos, exp

N = 100
a = 2.
t = linspace(0, pi / 2, N)
x = tan(t)
f = cos(a * x) * exp(-x ** 2) * (1 + x ** 2)
print("numerical solution = ", 2 * simps(f, t))
print("analytical solution = ", sqrt(pi) * exp(-a ** 2 / 4))

Your computer will have a very hard time representing those boundary limits.
Start by plotting your function.
It also helps to know the answer before you start.
I'd recommend breaking it into two integrals: one from minus-infinity to zero and another from zero to plus-infinity. As noted by flebool below, it's an even function. Make sure you know what that means and the implications for your solution.
Next you'll need an integration scheme that can deal with boundary conditions at infinity. Look for a log quadrature scheme.
A naive Euler integration would not be my first thought.
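One concrete scheme suited to this integrand is Gauss-Hermite quadrature, which absorbs the exp(-x**2) weight exactly, so only cos(a*x) has to be sampled. A minimal sketch with numpy (a = 2 and 50 nodes are arbitrary sample choices):
import numpy as np

a = 2.0
nodes, weights = np.polynomial.hermite.hermgauss(50)
# integral over the real line of exp(-x**2) * g(x) ~ sum(w_i * g(x_i))
approx = np.sum(weights * np.cos(a * nodes))
print(approx)
print(np.sqrt(np.pi) * np.exp(-a ** 2 / 4))  # analytical value, for comparison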

Related

Derivatives in Python

I am trying to find the coefficients of a finite series, $f(x) = \sum_n a_n x^n$. To get the $n$th coefficient, we can take the $n$th derivative evaluated at zero and divide by $n!$; by Cauchy's integral formula, this means the $n$th coefficient is
$$
a_n = \frac{1}{2\pi i } \oint_C \frac{f(z)}{z^{n+1}} dz
$$
I believe this code takes the derivative of a function using the above contour integral.
import math
import numpy
import matplotlib.pyplot as plt

def F(x):
    mean = 10
    return math.exp(mean*(x.real-1))

def p(n):
    mean = 10
    return (math.pow(mean, n) * math.exp(-mean)) / math.factorial(n)

def integration(func, a, n, r, n_steps):
    z = r * numpy.exp(2j * numpy.pi * numpy.arange(0, 1, 1. / n_steps))
    return math.factorial(n) * numpy.mean(func(a + z) / z**n)

ns = list(range(20))
f2 = numpy.vectorize(F)
plt.plot(ns, [p(n) for n in ns], label='Actual')
plt.plot(ns, [integration(f2, a=0., n=n, r=1., n_steps=100).real/math.factorial(n) for n in ns], label='Numerical derivative')
plt.legend()
However, it is clear that the numerical derivative is completely off the actual values of the coefficients of the series. What am I doing wrong?
The formulas in the Mathematics Stack Exchange answer that you're using to derive the coefficients of the power series expansion of F are based on complex analysis - coming for example from Cauchy's residue theorem (though other derivations are possible). One of the assumptions necessary to make those formulas work is that you have a holomorphic (i.e., complex differentiable) function.
Your definition of F gives a function that's not holomorphic. (For one thing, it always gives a real result for any complex input, which isn't possible for a non-constant holomorphic function.) But it's easily fixed to be holomorphic, while continuing to return the same result for real inputs.
Here's a fixed version of F, which replaces x.real with x. Since the input to exp is now complex, it's also necessary to use cmath.exp instead of math.exp to avoid a TypeError:
import cmath

def F(x):
    mean = 10
    return cmath.exp(mean*(x-1))
After that fix for F, if I run your code I get rather surprisingly accurate results. Here's the graph that I get. (I had to print out the values to double check that that graph really did show two lines on top of one another.)

Solving a non-linear equation in Python: the answer is the same as the initial guess

So I have this complicated equation which I need to solve. I think the final x should be of order 1E22, but the problem with this code is that it crashes my entire system. Is there a fix? I tried scipy.optimize.root, but it doesn't really solve anything at this order of magnitude (it gives the final answer as the initial guess, without any iteration).
from scipy.optimize import fsolve
import math
import mpmath
import scipy
import sympy
from sympy.solvers import solve
from sympy import Symbol
from sympy import sqrt,exp
x = Symbol('x',positive=True)
cs = 507.643E-12
esi = 1.05E-10
q = 1.6E-19
T = 300
k = 1.381E-23
ni = 1.45E16
print(solve(exp(x/((2*cs/(esi*q))**2)) - ((x/ni)**(esi*k*T)),x))
def func(N):
    return (math.exp(N/math.pow(2*cs/(esi*q),2)) - math.pow(N/ni,(esi*k*T)))
n_initial_guess = 1E21
n_solution = fsolve(func, n_initial_guess)
print ("The solution is n = %f" % n_solution)
print ("at which the value of the expression is %f" % func(n_solution))
print(scipy.optimize.root(func, 1E22,tol=1E-10))
Neither of the scipy functions works. The sympy function crashes my laptop. Would Matlab be ideal for this?
Numeric solution with SciPy
The problem that SciPy has with this equation is loss of significance. You are raising N to the tiny power esi*k*T, which makes the result very near 1; in floating point arithmetic, it becomes exactly 1. Similarly, the part coming from the exponential becomes 1. The two parts are then subtracted, leaving 0, so the equation appears to be already solved. You could have seen this happening by printing func(1E21) -- it returns 0.
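A quick check of that loss of significance, using the constants from the question:
import math

cs, esi, q, T, k, ni = 507.643E-12, 1.05E-10, 1.6E-19, 300, 1.381E-23, 1.45E16
N = 1E21
# both terms round to exactly 1.0 in float64, so the difference prints 0.0
print(math.exp(N / (2*cs/(esi*q))**2) - (N/ni)**(esi*k*T))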
The way to deal with the loss of significance is to rewrite the equation, from the original form
exp(x/((2*cs/(esi*q))**2)) == (x/ni)**(esi*k*T)
by raising both sides to the power 1/(esi*k*T):
exp(x*esi*q**2/(k*T*(2*cs)**2)) == x/ni
So func becomes
def func(N):
    return np.exp(N*esi*q**2/(k*T*(2*cs)**2)) - (N/ni)
(It is advisable to use NumPy functions with SciPy solvers.) That said, the solvers, for example root(func, 1E10), will report being unable to converge to a solution.
Symbolic solution with SymPy
SymPy is for solving equations analytically. It does not care for a bunch of floating point numbers. Give it a symbolic equation instead:
from sympy import symbols, solve, exp

x, a, b, c = symbols('x, a, b, c', positive=True)
sol = solve(exp(x/a) - (x/b)**c, x)[0]
The solution is obtained as -a*c*LambertW(-b/(a*c)). Then it can be evaluated:
cs = 507.643E-12
esi = 1.05E-10
q = 1.6E-19
T = 300
k = 1.381E-23
ni = 1.45E16
print(sol.evalf(subs={a: (2*cs/(esi*q))**2, b: ni, c: esi*k*T}))
This prints -21301663061.0653 - 4649834682.69762*I, confirming what one would already expect from the failure of convergence with SciPy: there are no real solutions of the equation.

How to calculate the derivative of a moment generating function in Python?

Here is my code so far. I thought I could use scipy, but it doesn't give me the right answer for the second derivative, moment(0, 2). My guess is that I'm not applying scipy.misc.derivative correctly and that I should use diffs_exp from sympy, but I couldn't get that to work either.
from scipy import misc
import numpy as np

def mgf(s):
    mu = 2
    sigma = 0.5
    mgf = np.exp(mu*s + ((sigma**2)*(s**2))/2)
    return mgf

def moment(s, i):
    mo = misc.derivative(mgf, s, dx=0.000000001, n=i)
    return mo
moment(s, i) evaluates correctly when i=1 but not when i>1. moment(0, 2) should equal mu^2 + sigma^2 = 4.25, but the function currently returns 0.0.
The function will only be evaluated at s=0; the more important part is that the differentiation is correct.
Here's how one would do it symbolically with sympy and numerically evaluate the result for a particular mu, sigma and s
In [1]: from sympy import *
In [2]: mu, sigma, s = symbols("mu sigma s")
In [3]: expr = exp(mu*s+(sigma*s)**2/2)
In [4]: f = lambdify((mu, sigma, s), expr.diff(s, 2))
In [5]: f(2, 0.5, 0)
Out[5]: 4.25
Choosing a good step size for a finite difference scheme is a tricky business. Too small a step and you're doomed by round-off error (as you've found). Too large a step, and the scheme is too coarse (as you've found as well). scipy.misc.derivative's default step is not very useful, BTW. There is some literature on how to choose a sensible step. E.g., Numerical Recipes has a brief introduction to a simple scheme.
In this particular case finding a sensible step is reasonably easy:
In [40]: import numpy as np
In [41]: from scipy.misc import derivative
In [42]: def f(x):
   ....:     arg = 2.*x + (0.5*x)**2 / 2.
   ....:     return np.exp(arg)
   ....:
In [53]: derivative(f, 0., dx=1e-5, n=2)
Out[53]: 4.2499981312005266
An alternative is to use a package which does a smarter step size selection (one keyword for internet/literature searches is Romberg extrapolation). For example, numdifftools:
In [57]: import numdifftools as nd
In [59]: fdd = nd.Derivative(f, n=2)
In [60]: fdd(0)
Out[60]: array([ 4.25])
So I mucked around in the source code after misreading the question in an earlier answer (since deleted).
Scipy misc.derivative calculates second order derivative by default as
lim h->0 (f(x+h)-2*f(x)+f(x-h))/h^2
The problem here occurs because the output of np.exp() is float64, which has limited precision: 52 bits for the mantissa and 11 for the exponent. When we decrease dx, the difference between the terms moves into ever higher-order digits, which due to the limited precision are simply not present; on summation the result vanishes to zero.
For reference, in the above function the values are 1.000000002, 1, 0.999999998 for f(x+h), f(x), f(x-h) respectively, with x=0 and h=1e-9. One solution would be to use functions with higher precision, but that would involve changing the scipy source code, which is not a small undertaking. (Python's pow function is not arbitrary precision.)
The other (practical) option is to use larger values for dx. dx=1e-2 actually seems to give a close enough answer, i.e. 4.2501848996 compared to 4.25, the actual second derivative.
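A small demonstration of that cancellation, using the same f as above:
import numpy as np

def f(x):
    return np.exp(2.*x + (0.5*x)**2 / 2.)

for h in (1e-9, 1e-5, 1e-2):
    # central second difference (f(x+h) - 2*f(x) + f(x-h)) / h**2 at x = 0
    print(h, (f(h) - 2*f(0.) + f(-h)) / h**2)
# h=1e-9 collapses to 0.0, while h=1e-5 and h=1e-2 land near 4.25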

On ordinary differential equations (ODE) and optimization, in Python

I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically, so I want to know how to solve it numerically. I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. By partial fractions, we can always write such a quotient as a sum of two simpler quotients times constants:
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y to -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial conditions. (Note that if y starts at exactly 0 or 1, dy/dt = 0 and the value of y never changes, so the logarithms would not apply.) Plug in the values for y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1), and then using y=0.8 gives a value for t. Because of the slow logarithmic growth of ln( y / (1-y) ), y will reach 0.8 within a modest range of t values unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t; then you can plot the function as well.
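A minimal sketch of that closed form, assuming a hypothetical initial value y(0) = 0.5:
import math

y0, y_target = 0.5, 0.8
C = math.log(y0 / (1 - y0))   # constant of integration from the initial condition
t = (math.log(y_target / (1 - y_target)) - C) / 0.01
print(t)   # about 138.6, well inside the interval (0, 3000)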
Edit: Numerical integration
For a more complexed ODE which cannot be solved analytically, you will have to try numerically. Initially we only know the value of the function at zero time y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
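A bare-bones sketch of that Euler scheme for this particular ODE, again assuming a hypothetical starting value y(0) = 0.5:
dt = 0.1
t, y = 0.0, 0.5
while y < 0.8 and t < 3000:
    y += 0.01 * y * (1 - y) * dt   # Euler step: y(t+dt) ~ y(t) + y'(t)*dt
    t += dt
print(t, y)   # first grid point where the trajectory passes 0.8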
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately, of course, somebody else has done the hard work here, and you don't have to worry too much about problems like numerical stability or have an in-depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straight away. For instance:
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)

t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint
ts = np.arange(0, 3000, 1)  # time series - start, stop, step

def rhs(y, t):
    return 0.01*y*(1-y)

y0 = np.array([0.5])  # initial value (note: y=1 is a fixed point of this ODE, so starting there y would never move)
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the time array ts matches ys row for row). (This may not work first time because I am constructing from memory.)
This might involve using the scipy interpolate function for the ys array, such that you get a result at time t.
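For instance, a hedged sketch of that interpolation step, assuming the ys computed above actually crosses the target (it does for an initial value below 0.8, such as 0.5):
idx = int(np.argmax(ys[:, 0] > 0.8))   # first sample past the target
t_cross = np.interp(0.8, ys[idx-1:idx+1, 0], ts[idx-1:idx+1])
print(t_cross)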
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; odeint on the scipy website has examples for systems such as coupled springs that can be solved, and these could be extended.
What you are asking for is a ODE integrator with root finding capabilities. They exist and the low-level code for such integrators is supplied with scipy, but they have not yet been wrapped in python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files

Calculating the area underneath a mathematical function

I have a range of data that I have approximated using a polynomial of degree 2 in Python. I want to calculate the area underneath this polynomial between 0 and 1.
Is there a calculus, or similar package from numpy that I can use, or should I just make a simple function to integrate these functions?
I'm a little unclear what the best approach for defining mathematical functions is.
Thanks.
If you're integrating only polynomials, you don't need to represent a general mathematical function, use numpy.poly1d, which has an integ method for integration.
>>> import numpy
>>> p = numpy.poly1d([2, 4, 6])
>>> print(p)
   2
2 x + 4 x + 6
>>> i = p.integ()
>>> i
poly1d([ 0.66666667,  2.        ,  6.        ,  0.        ])
>>> integral = i(1) - i(0) # Use call notation to evaluate a poly1d
>>> integral
8.6666666666666661
For integrating arbitrary numerical functions, you would use scipy.integrate with ordinary Python functions. For integrating functions analytically, you would use sympy. It doesn't sound like you want either in this case, especially not the latter.
Look, Ma, no imports!
>>> coeffs = [2., 4., 6.]
>>> sum(coeff / (i+1) for i, coeff in enumerate(reversed(coeffs)))
8.6666666666666661
>>>
Our guarantee: Works for a polynomial of any positive degree or your money back!
Update from our research lab: Guarantee extended; s/positive/non-negative/ :-)
Update Here's the industrial-strength version that is robust in the face of stray ints in the coefficients without having a function call in the loop, and uses neither enumerate() nor reversed() in the setup:
>>> icoeffs = [2, 4, 6]
>>> tot = 0.0
>>> divisor = float(len(icoeffs))
>>> for coeff in icoeffs:
... tot += coeff / divisor
... divisor -= 1.0
...
>>> tot
8.6666666666666661
>>>
It might be overkill to resort to general-purpose numeric integration algorithms for your special case...if you work out the algebra, there's a simple expression that gives you the area.
You have a polynomial of degree 2: f(x) = ax^2 + bx + c
You want to find the area under the curve for x in the range [0,1].
The antiderivative is F(x) = ax^3/3 + bx^2/2 + cx + C
The area under the curve from 0 to 1 is: F(1) - F(0) = a/3 + b/2 + c
So if you're only calculating the area for the interval [0,1], you might consider
using this simple expression rather than resorting to the general-purpose methods.
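A one-line sketch of that expression, with the hypothetical coefficients a=2, b=4, c=6 used elsewhere on this page:
a, b, c = 2., 4., 6.
area = a/3 + b/2 + c   # F(1) - F(0)
print(area)            # 8.666...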
'quad' in scipy.integrate is the general purpose method for integrating functions of a single variable over a definite interval. In a simple case (such as the one described in your question) you pass in your function and the lower and upper limits, respectively. 'quad' returns a tuple comprised of the integral result and an upper bound on the error term.
from scipy import integrate as TG
fnx = lambda x: 3*x**2 + 9*x # some polynomial of degree two
aoc, err = TG.quad(fnx, 0, 1)
Note: after I posted this, I saw an answer posted before mine which represents polynomials using 'poly1d' in NumPy. My scriptlet just above can also accept a polynomial in this form:
import numpy as NP
px = NP.poly1d([2,4,6])
aoc, err = TG.quad(px, 0, 1)
# returns (8.6666666666666661, 9.6219328800846896e-14)
If one is integrating quadratic or cubic polynomials from the get-go, an alternative to deriving the explicit integral expressions is to use Simpson's rule; it is a deep fact that this method exactly integrates polynomials of degree 3 and lower.
To borrow Mike Graham's example (I haven't used Python in a while; apologies if the code looks wonky):
>>> import numpy
>>> p = numpy.poly1d([2, 4, 6])
>>> print(p)
   2
2 x + 4 x + 6
>>> integral = (1 - 0)*(p(0) + 4*p((0 + 1)/2.) + p(1))/6
uses Simpson's rule to compute the value of the integral. You can verify for yourself that the method works as advertised.
Of course, I did not simplify the expression for the integral, to make clear that the 0 and 1 can be replaced with arbitrary values u and v, and the code will still work for finding the integral of the function from u to v.
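As a sketch, the same rule packaged as a function of hypothetical endpoints u and v:
def simpson_poly(p, u, v):
    # Simpson's rule; exact for polynomials of degree 3 and lower
    return (v - u) * (p(u) + 4*p((u + v)/2.) + p(v)) / 6.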
