I want to evaluate the following double integral:
$$
\int_0^{2\pi} \int_0^{\arccos(\cos x)} e^{-(2 + \cos x - \cos y)} \, dy \, dx
$$
I want to use the dblquad method from the scipy.integrate package, which lets the limits of the inner integral be functions of the outer integration variable:
import scipy.integrate as spi
import numpy as np
x_limit = 0
y_limit = lambda x: np.arccos(np.cos(x))
integrand = lambda x, y: np.exp(-(2+np.cos(x)-np.cos(y)))
low_limit_y = 0 # inner integral
up_limit_y = y_limit
low_limit_x = x_limit # outer integral
up_limit_x = 2*np.pi-x_limit
integral = spi.dblquad(integrand, low_limit_x, up_limit_x, low_limit_y, up_limit_y)
print(integral)
Output:
(0.6934912861906996, 2.1067956428653226e-12)
The code runs, but does not give me the right answer. Using Wolfram Alpha I get the right answer: 3.58857
[Wolfram Alpha computation]
The only thing I've noticed is that the values from the two methods agree when the signs on the cosines are switched from + to - and vice versa:
[Wolfram Alpha computation with the signs on the cosines swapped]
However, I have no plausible reason why this should be the case. Does anyone have any clue what is going on here? I can separate the function out, computing the inner integral in a loop over all values of x and then summing the results, which gives the right answer, but that is really quite slow.
Take another look at the docstring of dblquad; it says
Return the double (definite) integral of ``func(y, x)`` from ``x = a..b``
and ``y = gfun(x)..hfun(x)``.
Note the order of arguments of func(y, x): y first, then x.
If you change your definition of integrand to
integrand = lambda y, x: np.exp(-(2+np.cos(x)-np.cos(y)))
you get the expected answer. That is also (in effect) what you did when you changed the signs of the cos terms in the integrand.
(You're not the first one to get tripped up by the expected order of the arguments to func.)
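For completeness, here is a minimal corrected version of the original script (same limits and integrand, only the argument order changed):

import numpy as np
import scipy.integrate as spi

# dblquad expects func(y, x): inner variable first, then outer
integrand = lambda y, x: np.exp(-(2 + np.cos(x) - np.cos(y)))
low_limit_y = lambda x: 0.0                    # inner integral
up_limit_y = lambda x: np.arccos(np.cos(x))

integral = spi.dblquad(integrand, 0.0, 2*np.pi, low_limit_y, up_limit_y)
print(integral)  # first element should be ~3.58857, as with Wolfram Alpha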
I have a function represented as NumPy arrays, i.e. y = f(x), where y and x are two ndarrays.
I am searching for a method that finds the roots of f(x).
Reading the scipy documentation, I was only able to find methods that work on user-defined functions, like scipy.optimize.root_scalar. I thought about using scipy.interpolate.interp1d to get an interpolated version of my function to use in scipy.optimize.root_scalar, but I'm not sure it would work and it seems pretty complicated.
Is there some other function that I can use instead?
You have to interpolate a function defined by numpy arrays, as all the solvers require a function that can return a value for any input x, not just those in your array. But this is not complicated; here is an example:
import numpy as np
from scipy import optimize
from scipy import interpolate

# our xs and ys
xs = np.array([0, 2, 5])
ys = np.array([-3, -1, 2])

# interpolated function
f = interpolate.interp1d(xs, ys)
sol = optimize.root_scalar(f, bracket=[xs[0], xs[-1]])
print(f'root is {sol.root}')

# check
f0 = f(sol.root)
print(f'value of function at the root: f({sol.root})={f0}')
output:
root is 3.0
value of function at the root: f(3.0)=0.0
You may also want to interpolate with higher-degree polynomials for higher accuracy of your root-finding (see the sketch below), e.g. How to perform cubic spline interpolation in python?
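For instance, a minimal sketch of the cubic variant (the sample data here is made up, and four points are used so the spline is genuinely cubic):

import numpy as np
from scipy import interpolate, optimize

# hypothetical sample data with a single sign change
xs = np.array([0.0, 1.0, 2.0, 5.0])
ys = np.array([-3.0, -2.0, -1.0, 2.0])

# a cubic spline is a smoother stand-in for interp1d
f_cubic = interpolate.CubicSpline(xs, ys)
sol = optimize.root_scalar(f_cubic, bracket=[xs[0], xs[-1]])
print(f'root is {sol.root}')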
I am trying to find the coefficients of a finite series, $f(x) = \sum_n a_n x^n$. To get the $n$th coefficient, we can take the $n$th derivative evaluated at zero and divide by $n!$; by Cauchy's integral formula, this can be written as a contour integral:
$$
a_n = \frac{f^{(n)}(0)}{n!} = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z^{n+1}} \, dz
$$
I believe this code takes the derivative of a function using the above contour integral.
import math
import numpy
import matplotlib.pyplot as plt

def F(x):
    # the function whose series coefficients we want (note the x.real)
    mean = 10
    return math.exp(mean * (x.real - 1))

def p(n):
    # the exact coefficients: the Poisson(10) pmf
    mean = 10
    return (math.pow(mean, n) * math.exp(-mean)) / math.factorial(n)

def integration(func, a, n, r, n_steps):
    # sample n_steps points on the circle of radius r around a; the mean of
    # func(a + z)/z**n over those points approximates the contour integral
    # (1/2*pi*i) * integral of func(z)/(z - a)**(n + 1), i.e. the n-th
    # derivative of func at a divided by n!
    z = r * numpy.exp(2j * numpy.pi * numpy.arange(0, 1, 1. / n_steps))
    return math.factorial(n) * numpy.mean(func(a + z) / z**n)

ns = list(range(20))
f2 = numpy.vectorize(F)
plt.plot(ns, [p(n) for n in ns], label='Actual')
plt.plot(ns, [integration(f2, a=0., n=n, r=1., n_steps=100).real / math.factorial(n) for n in ns],
         label='Numerical derivative')
plt.legend()
However, it is clear that the numerical derivative is completely off the actual values of the coefficients of the series. What am I doing wrong?
The formulas in the Mathematics Stack Exchange answer that you're using to derive the coefficients of the power series expansion of F are based on complex analysis, coming for example from Cauchy's residue theorem (though other derivations are possible). One of the assumptions necessary to make those formulas work is that you have a holomorphic (i.e., complex differentiable) function.
Your definition of F gives a function that's not holomorphic. (For one thing, it always gives a real result for any complex input, which isn't possible for a non-constant holomorphic function.) But it's easily fixed to be holomorphic, while continuing to return the same result for real inputs.
Here's a fixed version of F, which replaces x.real with x. Since the input to exp is now complex, it's also necessary to use cmath.exp instead of math.exp to avoid a TypeError:
import cmath

def F(x):
    mean = 10
    return cmath.exp(mean * (x - 1))
After that fix for F, if I run your code I get rather surprisingly accurate results: the two curves in the resulting graph lie right on top of one another. (I had to print out the values to double-check that the graph really did show two lines.)
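If you would rather see numbers than a plot, here is a minimal self-contained spot check under the same setup (mean-10 Poisson coefficients, unit circle, 100 sample points):

import cmath
import math
import numpy

def F(x):
    mean = 10
    return cmath.exp(mean * (x - 1))  # the fixed, holomorphic version

def p(n):
    mean = 10
    return (mean**n * math.exp(-mean)) / math.factorial(n)

def integration(func, a, n, r, n_steps):
    z = r * numpy.exp(2j * numpy.pi * numpy.arange(0, 1, 1.0 / n_steps))
    return math.factorial(n) * numpy.mean(func(a + z) / z**n)

f2 = numpy.vectorize(F)
for n in (0, 1, 5, 10):
    approx = integration(f2, a=0.0, n=n, r=1.0, n_steps=100).real / math.factorial(n)
    print(n, p(n), approx)  # the last two columns should agree closely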
Is SymPy able to compute Fourier transforms quickly, or do I need other software? I have tried many things, but I am pretty inexperienced with Python and SymPy. Is there a quicker way to do these Fourier transforms with Python?
$$
\frac{a + j\omega}{4a^2 + (a + j\omega)^2}
$$
I tried this:

from sympy import poly, pi, fourier_transform
from sympy.abc import a, f, t

n = poly(a**2 + t**2)
k = t / n
fourier_transform(k, t, 2*pi*f)
And I get this message:
And another example from a Python 3.9 shell:
So what about the actual result?
1.
Currently, if the Fourier transform returns a Piecewise expression, SymPy throws a fuss and just returns an unevaluated expression instead. So SymPy can do the integral; it just isn't in a very nice form. Here is the result after manual integration.
You can use this function on your own to obtain this result if you want this kind of answer:

from sympy import exp, integrate, oo, I, S

def my_fourier_transform(f, t, s):
    return integrate(f*exp(-2*S.Pi*I*t*s), (t, -oo, oo)).simplify()
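A usage sketch, reusing my_fourier_transform from above (the symbols here are unrestricted, so the call can take a while and returns the long Piecewise result discussed below):

from sympy.abc import a, s, t

expr = t / (a**2 + t**2)
result = my_fourier_transform(expr, t, s)
print(result)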
One can see that the long condition has arg(a) and arg(s) which should be sign(a) and sign(s) respectively (assuming they are real-valued). After doing a similar thing in Wolfram Alpha, I think this is a correct result. It is just not a very nice one.
But I found a trick that helps. If SymPy struggles to simplify something, usually giving it stronger assumptions is the way to go if your main goal is just to get an answer. A lot of the time, the answer is still correct even if the assumptions don't hold. So we make the variables positive.
Note that SymPy does the transform differently from Wolfram Alpha, as noted in the comments in the code below. This is why my third argument is different.
from sympy import *
a, s, t = symbols('a s t', positive=True)
f = t / (a**2 + t**2)
# Wolfram computes integral(f(t)*exp(I*t*w))
# SymPy computes integral(f(t)*exp(-2*pi*I*s*t))
# So w = -2*pi*s
print(fourier_transform(f, t, -s))
# I*pi*exp(-2*pi*a*s)
2.
Correct me if I'm wrong, but according to Wikipedia:
So the Fourier transform of the Dirac delta is 1.
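A quick sketch to check that claim in SymPy (my assumption: under SymPy's 2*pi convention the transform of DiracDelta(t) should still come out as 1):

from sympy import fourier_transform, DiracDelta
from sympy.abc import s, t

print(fourier_transform(DiracDelta(t), t, s))  # expect 1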
3.
Using the my_fourier_transform defined above, we get
The first condition is always false. This is probably a failure on SymPy's part, since this integral diverges. I assume it's because it can't decide whether it is oo, -oo, or zoo.
I am studying how to solve differential equations in Python with odeint, and as a test I tried to solve the following ODE (this example comes from https://apmonitor.com/pdc/index.php/Main/SolveDifferentialEquations):
# first import the necessary libraries
import numpy as np
from scipy.integrate import odeint

# function that returns dy/dt
def model(y, t):
    k = 0.3
    dydt = -k*y
    return dydt

# initial condition
y0 = 5.0

# time points
t = np.linspace(0, 20)

# solve ODE
def y(t):
    return odeint(model, y0, t)
So if I plot the results with matplotlib, or more simply run print(y(t)), this works perfectly! But if I try to compute the value of the function for a fixed time, for instance t1 = t[2] (= 0.8163), I get the error
t1 = t[2]
print(y(t1))
ValueError("diff requires input that is at least one dimensional")
Why can I only compute y(t) for the interval t = np.linspace(0,20) but not for a single number in that interval? Is there a way to fix this?
Thank you very much.
The odeint function solves your differential equation numerically. To do that, you need to specify the points where you want your solution to be evaluated. These points also influence the accuracy of the solution. Generally, the more points you give to odeint, the better the result (when solving over the same time interval).
This means that there is no way for odeint to know what accuracy you want if you only supply a single time at which you want to evaluate the function. Instead you always need to supply a range of times (like you did with np.linspace). odeint then returns the value of the solution at all these times.
y(t) is an array of values of your solution and the third value in the array corresponds to the solution at the third time in t:
The solution evaluated at t[0] is y(t)[0] = y0
The solution evaluated at t[1] is y(t)[1]
The solution evaluated at t[2] is y(t)[2]
...
So instead of
print(y(t[2]))
you need to use
print(y(t)[2])
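A minimal sketch, reusing y and t from the question:

sol = y(t)           # solve once over the whole time grid
t1 = t[2]
print(t1, sol[2])    # the solution evaluated at t[2]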
Sometimes I get a wrong solution when I integrate with infinite boundaries in Python. Here is a simple example to illustrate my confusion:
import numpy as np
from scipy.integrate import quad

def integrand1(x):
    return np.exp(-(x - 1.0)**2)

def integrand2(x):
    return np.exp(-(x - 100.0)**2)

solution1 = quad(integrand1, -np.inf, np.inf)
print(solution1)

solution2 = quad(integrand2, -np.inf, np.inf)
print(solution2)
while the output is:
(1.7724538509055159, 3.668332157626072e-11)
(0.0, 0.0)
I don't understand why the second integral is wrong while the first one is correct. It would be great if you could tell me some tricks for handling infinite limits in Python.
There is nothing inherently wrong with your code. The results you get are due to the quad algorithm being an approximate method whose accuracy, from what I gathered doing some tests, greatly depends on where the midpoint of the integration interval is located, with respect to the x-axis interval where the integrand is significantly different from 0.
The midpoint of the integration interval in case of a (-inf,+inf) interval is always 0 (see comment to relevant Fortran code here, starting at line 238), and (sadly) cannot be configured. Your integrand2 function is centered on x=100, which is too far from the quad algorithm's midpoint for it to be accurate enough.
It would be nice to be able to specify the midpoint in case of integration between -inf and +inf, but the good news is that you can implement it yourself with function decorators. First you need a wrapper for your integrand function, in order to shift it arbitrarily over the x axis:
def shift_integrand(integrand, offset):
    def dec(x):
        return integrand(x - offset)
    return dec
This generates a new function based on any integrand you like, just shifting it along the x axis according to the offset parameter. So if you do something like this (using your integrand1 and integrand2 functions):
new_integrand1 = shift_integrand(integrand1, -1.0)
print(new_integrand1(0.0))

new_integrand2 = shift_integrand(integrand2, -100.0)
print(new_integrand2(0.0))
you get:
1.0
1.0
Now you need another wrapper for the quad function, in order to be able to pass a new midpoint:
def my_quad(func, a, b, midpoint=0.0, **kwargs):
    if midpoint != 0.0:
        func = shift_integrand(func, -midpoint)
    return quad(func, a, b, **kwargs)
Finally, knowing where your integrands are centered, you can call it thus:
solution2 = my_quad(integrand2, -np.inf, np.inf, midpoint=100.0)
print(solution2)
Which yields:
(1.772453850905516, 1.4202639944499085e-08)
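As an aside (my own suggestion, not part of the answer above): another common workaround is to split the domain at the known center, so that each half-infinite piece has the peak right at its finite boundary, which quad handles well:

# reusing integrand2, quad and np from the code above
left = quad(integrand2, -np.inf, 100.0)
right = quad(integrand2, 100.0, np.inf)
print(left[0] + right[0])  # ~1.7724538509055159, i.e. sqrt(pi)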