Manually integrating to get inverse Laplace transform - python

I wanted to compute the inverse Laplace transform manually, without resorting to any library. Specifically, I wanted to compute a bilateral inverse Laplace transform. I wanted to check my understanding and tried the following manually, but I am not able to match the answer. Where am I going wrong?
I want to compute the inverse Laplace transform of 1/(s-a). I know the answer is e^(a*t). My attempt:
a = 2
t = 0.5
f = lambda s: 1/(s-a)
def g(u):
    gammah = 1
    s = complex(real=gammah, imag=u)
    return (f(s)).real*np.cos(s.imag*t) * 2*np.exp(s.real*t)/pi
import spicy as sp
import numpy as np
sp.integrate(g, 0, np.inf, limit=10000)
This gives me -0.9999999, but I know the answer is e^(a*t) = e^1 ≈ 2.718...

The main error is mathematical. As Wikipedia says,
integration is done along the vertical line Re(s) = γ in the complex plane such that γ is greater than the real part of all singularities of F(s)
The function F(s) = 1/(s-a) has a singularity at a, which is 2 in your example. So γ needs to be greater than 2. For example, with γ=3 the output of quad is
(2.718278877362764, 2.911191228083254e-06)
as expected. By the way, your import spicy etc. can't possibly work; the correct import syntax would be
from scipy.integrate import quad
# ....
quad(g, 0, np.inf, limit=10000)
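Putting both fixes together, here is a minimal end-to-end sketch (my assembly of the above, assuming the same cosine-form Bromwich integrand the question uses, with γ = 3 > a):

import numpy as np
from scipy.integrate import quad

a = 2        # pole of F(s) = 1/(s - a)
t = 0.5
gamma = 3.0  # contour must satisfy gamma > Re(pole) = a

f = lambda s: 1/(s - a)

def g(u):
    # real cosine form of the Bromwich integral along Re(s) = gamma
    s = complex(gamma, u)
    return f(s).real * np.cos(u*t) * 2*np.exp(gamma*t)/np.pi

print(quad(g, 0, np.inf, limit=10000))  # ~(2.718279, 2.9e-06), i.e. exp(a*t)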

Related

How do I numerically integrate a function that's a product of a Lorentzian and a cosine in Python?

I am new to Stack Overflow and also quite new to Python, so I hope I am asking my question in an appropriate manner.
I am running Python code similar to this minimal example, with an example function that is a product of a Lorentzian with a cosine that I want to numerically integrate:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

# minimal example:
omega_loc = 15
gamma = 5

def Lorentzian(w):
    #print(w)
    return (w**3)/((w/omega_loc) + 1)**2*(gamma/2)/((w-omega_loc)**2+(gamma/2)**2)

def intRe(t):
    return quad(lambda w: w**(-2)*Lorentzian(w)*(1-np.cos(w*t)), 0, np.inf, limit=10000)[0]

plt.figure(1)
plot_range = np.linspace(0, 100, 1000)
plt.plot(plot_range, [intRe(t) for t in plot_range])
Independent of the upper limit of the integration, I never get the code to run to completion and give me a result.
When I enable the #print(w) line, it seems like the code just keeps probing the integral at different values of w in what looks like an infinite loop. The console also reports a roundoff error.
Is there a different way to do numerical integration in Python that is better suited to this kind of function than quad, or did I make a more fundamental error?
Observations
Close to zero, (1 - cos(w*t))/w**2 tends to 0/0. We can use the Taylor expansion t**2*(1/2 - (w*t)**2/24).
Going to infinity, the Lorentzian decays only slowly (roughly like 1/w), and the cosine term makes the integrand oscillate indefinitely; the integral can be approximated by multiplying that term by a slowly decaying damping factor.
You are using a linearly spaced scale with many points. It is easier to visualize with t on a log scale.
[Plot omitted: the integrand before damping the cosine term.]
I introduced a damping parameter to tune the attenuation of the oscillations:
def cosinus_term(w, t, damping=1e4*omega_loc):
    return np.where(abs(w*t) < 1e-6, t**2*(0.5 - (w*t)**2/24.0),
                    (1 - np.exp(-abs(w/damping))*np.cos(w*t))/w**2)

def intRe(t, damping=1e4*omega_loc):
    # pass damping through, otherwise the argument given below is ignored
    return quad(lambda w: cosinus_term(w, t, damping)*Lorentzian(w), 0, np.inf, limit=10000)[0]
Plotting with the following code
plt.figure(1)
plot_range = np.logspace(-3,3,100)
plt.plot(plot_range, [intRe(t, 1e2*omega_loc) for t in plot_range])
plt.plot(plot_range, [intRe(t, 1e3*omega_loc) for t in plot_range])
plt.xscale('log')
It runs in less than 3 minutes here, and the two results are close to each other, especially for large t, suggesting that the damping doesn't affect the result too much.
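As an aside not in the original answer: QUADPACK (via quad's weight argument) has a dedicated Fourier-integral routine for oscillatory integrands on infinite intervals, which may avoid hand-tuned damping altogether. A sketch under that assumption, splitting off the cosine part (for t > 0):

from scipy.integrate import quad

def intRe_fourier(t):
    # non-oscillatory part: behaves like w near 0 and decays like 1/w**3
    # (quad samples interior points, so the removable 0/0 at w=0 is not hit)
    base = quad(lambda w: Lorentzian(w)/w**2, 0, np.inf, limit=10000)[0]
    # oscillatory part: weight='cos' with an infinite upper limit makes quad
    # compute the Fourier integral of Lorentzian(w)/w**2 against cos(t*w)
    osc = quad(lambda w: Lorentzian(w)/w**2, 0, np.inf,
               weight='cos', wvar=t, limlst=200)[0]
    return base - osc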

The Absolute Value of a Complex Number with Numpy

I have the following script in Python. I am calculating the Fourier transform of an array. When I want to plot the results (the Fourier transform), I use the absolute value of that calculation.
However, I do not know how the absolute value of a complex number is computed.
Does anyone know how it is calculated? I need to reproduce this in Java.
import numpy as np
import matplotlib.pyplot as plt
from numpy import fft
inp = [1,2,3,4]
res = fft.fft(inp)
print(res[1]) # returns (-2+2j) complex number
print(np.abs(res[1])) # returns 2.8284271247461903
np.abs gives the magnitude of a complex number, i.e. sqrt(a^2 + b^2); in your case that's sqrt(8).
https://numpy.org/doc/stable/reference/generated/numpy.absolute.html
sqrt(Re(z)**2 + Im(z)**2)
For z = a + ib this becomes:
sqrt(a*a + b*b)
It's just the Euclidean norm. You sum the squares of the real part and the imaginary part (without the i) and take the square root.
https://www.varsitytutors.com/hotmath/hotmath_help/topics/absolute-value-complex-number
From the numpy.absolute documentation:
This mathematical function helps user to calculate absolute value of each element.
For a complex number a+ib, the absolute value is sqrt(a^2 + b^2).
For a complex value a+ib, you can consider using the Java Math static method hypot:
Math.hypot(a, b)
The method is an implementation of the Pythagorean formula sqrt(a*a + b*b), but additionally provides underflow and overflow protection.
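To make the correspondence concrete, here is a small check (mine, not from the answers above) of np.abs against the explicit formula and the hypot form:

import numpy as np
from numpy import fft

z = fft.fft([1, 2, 3, 4])[1]            # (-2+2j)
print(np.abs(z))                        # 2.8284271247461903
print(np.sqrt(z.real**2 + z.imag**2))   # same: the naive Pythagorean formula
print(np.hypot(z.real, z.imag))         # same, with overflow protection like Java's Math.hypot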

Correctly unwrapping arctangent function in Numpy

I'm trying to figure out a better way to unwrap the output of numpy's arctan function. Let's say I have:
import numpy as np
pi = np.pi
angles = np.deg2rad(range(0,5*360))
tangent = np.tan(angles)
arctangent = np.arctan(tangent)
Now I have the angles returned, but they are only from -pi/2 to pi/2, and I want them back unwrapped (from 0 to 10 pi). Using the numpy function np.unwrap does not work for this and I'm not sure why, so I've been using my own function:
def arctan_unwrap(phase_data):
    phase = [2*(x + pi/2) for x in phase_data]
    phase = np.unwrap(phase)
    phase = [(x/2.0 - np.pi/2) for x in phase]
    return phase
This does return the original angles. I'm trying to figure out a way to clean this up, or to have np.unwrap do this on its own, but I can't figure it out. Does anyone know how to do this?
Since np.tan and np.arctan both return an array, even if the input is a list, your unwrap can be written as:
np.unwrap(2*(x+np.pi/2))/2-np.pi/2
For the test values
np.unwrap(2*x)/2
works. But presumably you know what you are doing in adding the pi/2.
np.unwrap is pure Python within numpy, so you can easily study its implementation.
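For completeness, a quick self-check of the vectorized one-liner against the question's setup (a sketch; values as in the question):

import numpy as np

angles = np.deg2rad(np.arange(0, 5*360))     # 0 .. 10*pi
wrapped = np.arctan(np.tan(angles))          # folded into (-pi/2, pi/2)

# Stretch the period from pi to 2*pi so np.unwrap's default pi-jump
# detection applies, then undo the stretch and the offset.
unwrapped = np.unwrap(2*(wrapped + np.pi/2))/2 - np.pi/2
print(np.allclose(unwrapped, angles))        # True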

Python 3.4 scipy integrate.quad dropoff

I'm trying to compute the integral of a Gaussian in python like so:
from math import exp
from scipy import stats, integrate
import scipy.interpolate as interpolate
from numpy import cumsum, random, histogram, linspace, zeros, inf, pi, sqrt
import matplotlib.pyplot as plt

A = 1
mu = 0
sigma = 1
p = lambda x: A * exp(-(((x-mu)**2))/(2*(sigma**2)))
F = lambda x: integrate.quad(p, -inf, x)[0]

Ns = 1000
x = linspace(-50, 50, Ns)
y = zeros(Ns)
yy = zeros(Ns)
for i in range(Ns):
    y[i] = F(x[i])
    yy[i] = p(x[i])

plt.plot(x, y)
plt.plot(x, yy)
plt.show()
but if one looks at the plot, there is a drop to zero between roughly 21 and 22, and again after 38.
Does anyone know why it is doing that? Rounding errors perhaps?
thanks!!
I think the key to understand this problem is to recall that numerical integration methods calculate a weighted sum of function values at specific knots.
The Gaussian quickly goes to zero as you deviate from the mean, so basically on the interval (-50, 50) most of the function values are zero. If the integration method fails to sample points from the small area where the function is non-zero, it sees the whole function as completely flat and thus gives you the integral 0.
So what can you do?
Instead of choosing the fixed interval (-50, 50), choose an interval that is only a few standard deviations wide, to avoid integrating over an overly large stretch of zeros.
If you go only 5, 10 or 20 standard deviations to the left and to the right, you will not see this issue, and you still get a very accurate integration result.
[Plot omitted: the result when integrating from 10 standard deviations to the left and to the right.]
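A sketch of the suggested fix applied to the question's code (only the lower integration limit changes; the 10-sigma width matches the plot described above):

import numpy as np
from scipy import integrate
import matplotlib.pyplot as plt

A, mu, sigma = 1, 0, 1
p = lambda x: A * np.exp(-((x - mu)**2) / (2 * sigma**2))

# Integrate from 10 standard deviations below the mean instead of -inf,
# so quad's sample points cannot all miss the narrow peak.
F = lambda x: integrate.quad(p, mu - 10*sigma, x)[0]

x = np.linspace(mu - 10*sigma, mu + 10*sigma, 1000)
plt.plot(x, [F(xi) for xi in x])   # smooth curve, no drops to zero
plt.plot(x, p(x))
plt.show()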

Estimate Euclidean transformation with python

I want to do something similar to what in image analysis would be a standard 'image registration' using features.
I want to find the best transformation that transforms a set of 2D coordinates A in another one B.
But I want to add the extra constraint that the transformation is a rigid/Euclidean transformation, meaning that there is no scaling, only translation and rotation.
Normally, allowing scaling, I would do:
from numpy import array
from skimage import io, transform

destination = array([[1.0,2.0],[1.0,4.0],[3.0,3.0],[3.0,7.0]])
source = array([[1.2,1.7],[1.1,3.8],[3.1,3.4],[2.6,7.0]])
T = transform.estimate_transform('similarity', source, destination)
I believe estimate_transform under the hood just solves a least squares problem.
But I want to add the constraint of no scaling.
Is there any function in skimage or another package that solves this?
Probably I need to write my own optimization problem with scipy, CVXOPT or cvxpy.
Any help phrasing/implementing this optimization problem would be appreciated.
EDIT:
My implementation, thanks to Stefan van der Walt's answer:
from matplotlib.pylab import *
from scipy.optimize import *

def obj_fun(pars, x, src):
    theta, tx, ty = pars
    H = array([[cos(theta), -sin(theta), tx],
               [sin(theta),  cos(theta), ty],
               [0, 0, 1]])
    src1 = c_[src, ones(src.shape[0])]
    return sum((x - src1.dot(H.T)[:, :2])**2)

def apply_transform(pars, src):
    theta, tx, ty = pars
    H = array([[cos(theta), -sin(theta), tx],
               [sin(theta),  cos(theta), ty],
               [0, 0, 1]])
    src1 = c_[src, ones(src.shape[0])]
    return src1.dot(H.T)[:, :2]

dst, src = destination, source  # the point sets defined above
res = minimize(obj_fun, [0, 0, 0], args=(dst, src), method='Nelder-Mead')
With that extra constraint you are no longer solving a linear least squares problem, so you'll have to use one of SciPy's minimization functions. The inner part of your minimization would set up a matrix H:
H = np.array([[np.cos(theta), -np.sin(theta), tx],
              [np.sin(theta),  np.cos(theta), ty],
              [0, 0, 1]])
Then, you would compute the distance
|x_target - H.dot(x_source)|
for all data points and sum the errors. Now you have a cost function that you can send to the minimization function. You will probably also want to make use of RANSAC, which is available as skimage.measure.ransac, to reject outliers.
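A hypothetical sketch of that RANSAC route (parameter values are illustrative, not from the answer; a 2D rigid transform is determined by two point pairs, hence min_samples=2):

import numpy as np
from skimage.measure import ransac
from skimage.transform import EuclideanTransform

src = np.array([[1.2, 1.7], [1.1, 3.8], [3.1, 3.4], [2.6, 7.0]])
dst = np.array([[1.0, 2.0], [1.0, 4.0], [3.0, 3.0], [3.0, 7.0]])

# robustly fit rotation + translation, rejecting pairs that fit poorly
model, inliers = ransac((src, dst), EuclideanTransform,
                        min_samples=2, residual_threshold=0.5,
                        max_trials=100)
print(model.rotation, model.translation, inliers)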
skimage now provides native support in the transform module.
http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.estimate_transform
I find it somewhat easier than OpenCV. There is an extensive set of functions covering most use cases.
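For instance, the rigid case from the question can now be estimated directly (a sketch, assuming a scikit-image version where 'euclidean' is an accepted transformation type):

import numpy as np
from skimage import transform

destination = np.array([[1.0, 2.0], [1.0, 4.0], [3.0, 3.0], [3.0, 7.0]])
source = np.array([[1.2, 1.7], [1.1, 3.8], [3.1, 3.4], [2.6, 7.0]])

# 'euclidean' restricts the model to rotation + translation (no scaling)
T = transform.estimate_transform('euclidean', source, destination)
print(T.rotation, T.translation)
print(T(source))  # source points mapped by the estimated rigid transform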
