Discrete Fourier transform for odd function - python

I have an initial function u(x,0) = -sin(x) and I want to derive the FFT coefficients for an odd-parity solution of the form $u(x,t) = \sum_{k \geq 1} a_k \sin(kx)$. I tried using the usual expansion of the function in terms of $\exp(ikx)$, but it adds some error to the solution.
Can anyone suggest a procedure for extracting the Fourier coefficients so that the solution remains odd throughout, using numpy.fft.fft?

If the function is inherently odd (like a sine function), then only the imaginary part of the FFT output will be non-zero. I think your problem is that your sampled function is not periodic as it should be; you should exclude the last point:
import numpy as np

x = np.linspace(-np.pi, np.pi, 50, endpoint=False)
y = -np.sin(x)
yf = np.fft.fft(y)
even_part = yf.real
odd_part = yf.imag
Here only odd_part[1] and its conjugate mirror odd_part[-1] are non-zero.
If your function is not odd and you want to force it to be, you can either use the DST (discrete sine transform) as I mentioned in the comments, or build an odd extension by appending the negated, mirrored function on the left side and then take the fft.
Another point: if your input is real rather than complex, it is faster and more memory-efficient to use rfft.
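As a follow-up, here is a minimal sketch (not part of the original answer) of how the sine-series coefficients a_k can be recovered from the FFT; it assumes the signal is sampled on [0, 2*pi) so that no extra phase factor appears:

import numpy as np

N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = -np.sin(x)                 # u(x, 0) = -sin(x)

U = np.fft.rfft(u)             # non-negative frequencies only, since u is real
a = -2.0 / N * U.imag          # a_k in u(x) = sum_k a_k * sin(k*x)

print(np.round(a[:4], 10))     # expect a_1 ~ -1 and everything else ~ 0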


`np.linalg.solve` get solution matrix?

np.linalg.solve solves for x in a problem of the form Ax = b.
For my application, this is done to avoid calculating the inverse explicitly (i.e., x = inverse(A) @ b).
I'd like to access the effective inverse that was used to solve this problem, but looking at the documentation it doesn't appear to be an option... Is there a reasonable alternative approach I can follow to recover the inverse of A?
(np.linalg.inv(A) is not accurate enough for my use case)
Following the docs and source code, it seems NumPy is calling LAPACK's _gesv to compute the solution, the documentation of which reads:
The routine solves for X the system of linear equations A*X = B, where
A is an n-by-n matrix, the columns of matrix B are individual
right-hand sides, and the columns of X are the corresponding
solutions.
The LU decomposition with partial pivoting and row interchanges is
used to factor A as A = P * L * U, where P is a permutation matrix, L is
unit lower triangular, and U is upper triangular. The factored form of
A is then used to solve the system of equations A * X = B.
The NumPy implementation of solve doesn't return the inverted matrix back to the caller; it just frees that memory, so there's no hope there. SciPy provides low-level access to LAPACK, so you should be able to get at the result from there. You can follow the actual implementation in LAPACK's Fortran source code: dgesv.f, dgetrf.f and dgetrs.f. Alternatively, note that NumPy's inv still calls the same underlying code, so it might be enough for your use case... You didn't specify why you need the approximate inverse matrix.
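If reusing the factorization (rather than an explicit inverse) is acceptable, here is a hedged sketch of one route via SciPy; it uses scipy.linalg.lu_factor / lu_solve, which expose the same LU factorization that _gesv computes internally:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)

lu, piv = lu_factor(A)              # A = P @ L @ U with partial pivoting
x = lu_solve((lu, piv), b)          # solve A x = b directly from the factors

# if an explicit "effective inverse" is really required, solving against the
# identity reproduces it column by column from the same factors
A_inv = lu_solve((lu, piv), np.eye(5))

print(np.allclose(A @ x, b), np.allclose(A @ A_inv, np.eye(5)))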

Bisect with discontinuous monotonic function: Find root using bisection for weakly monotonic function allowing for jumps

I'm looking for a Python algorithm to find the root of a function f(x) using bisection, equivalent to scipy.optimize.bisect, but allowing for discontinuities (jumps) in f. The function f is weakly monotonic.
It would be nice, but not necessary, for the algorithm to flag if the crossing (root) is directly 'at' a jump, and in this case to return the exact value x at which the relevant jump occurs (i.e. the x for which sign(f(x-e)) != sign(f(x+e)) and abs(f(x-e)-f(x+e)) > a for infinitesimal e>0 and non-infinitesimal a>0). It is also okay if instead the algorithm, for example, simply returns an x within a certain tolerance in this case.
As the function is only weakly monotonous, it can have flat areas, and theoretically these can occur 'at' the root, i.e. where f=0: f(x)=0 for an entire range, x in [x_0,x_1]. In this case again, nice but not necessary for the algo to flag this particularity, and to, say, ensure an x from the range [x_0,x_1] is returned.
As long as you supply (possibly very small) strictly positive values for xtol and rtol, the function will work with discontinuities:
>>> import numpy as np
>>> from scipy import optimize
>>> optimize.bisect(f=np.sign, a=-1, b=1, xtol=0.0001, rtol=0.001)
0.0
If you look in the scipy codebase at the C source code implementation of the function you can see that this is a very simple function that makes no assumptions on continuity. It basically takes two points which have a sign change, and switches to a smaller range with a sign change, until the iterations run out or the tolerances are met.
Given your requirements that functions might be discontinuous / flat, it is in fact necessary (for any algorithm) to supply these tolerances. Without them, it could be impossible for an optimization function to converge to a solution.
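For illustration, here is a minimal sketch of that plain sign-change bisection (my own code, not scipy's implementation), with a crude extra check that flags when the sign change sits at a jump rather than at a smooth root:

import numpy as np

def bisect_jump(f, a, b, xtol=1e-12, maxiter=200):
    fa, fb = f(a), f(b)
    if fa == 0:
        return a, False
    if fb == 0:
        return b, False
    assert fa * fb < 0, "need a sign change on [a, b]"
    for _ in range(maxiter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0 or (b - a) < xtol:
            break
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    # if |f(m)| stays large even though the bracket has collapsed,
    # the sign change is a discontinuity rather than a smooth root
    return m, abs(fm) > 1e-8

f = lambda x: np.sign(x - 0.3) + 0.5     # jumps through zero at x = 0.3
print(bisect_jump(f, -1.0, 1.0))         # (~0.3, True)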

Sum of absolute values of polynomials with python numpy

Here's what I wrote: it's a classical exercise on interpolation, which I already finished and sent. I was wondering if there was another (longer) way...
q is a list of floats (the points of interpolation)
i is the index of the Lagrange polynomial
x is the point where it is evaluated:
def l(q, i, x):
    poly = 1.0
    for j, p in enumerate(q):
        if j == i:
            continue
        poly *= (x - p) / (q[i] - p)
    return poly
Then there is the function on which I'm working:
def Lambda(q, x):
    value = 0.0
    for j in range(len(q)):
        value += abs(l(q, j, x))
    return value
Now I can use some Python routines to find its maximum value in the interval [0, 1], and I did.
NumPy has a polynomial module, with which I can easily re-define l:
import numpy.polynomial.polynomial as P
def l_poly(q, i):
    poly = []
    for j, p in enumerate(q):
        if j == i:
            continue
        poly.append(p / (q[i] - p))
    return P.polyfromroots(poly)
I'd like to do the same with Lambda so that I can find its maximum using the built-in derivative function (find its zeros and so on and so forth). The problem is that it is a sum of abs(polynomials). Is there a way to do this? Or to mix the polynomial derivative and the derivative of abs(...)?
NumPy does not support arbitrary symbolic expressions. It works only with polynomials, representing a polynomial as an array of coefficients. The absolute value of a polynomial is not a polynomial, so it is not a concept that NumPy has. It is a symbolic expression that can be handled by symbolic manipulation libraries like SymPy.
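As a small illustration of that route (my own example, not from the answer), SymPy treats abs() of a polynomial as an ordinary symbolic expression and can differentiate it piecewise:

import sympy as sp

x = sp.symbols('x', real=True)
expr = sp.Abs(x**2 - 3*x + 1) + sp.Abs(x - 2)
print(sp.diff(expr, x))      # derivative expressed with sign(...) factors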
using the built-in derivative function (find its zeros and so on and so forth).
There are several problems with this:
As said before, the polyder method of NumPy does not apply to this situation, since abs(polynomial) is not a polynomial.
The derivative of the absolute value function is undefined at 0.
The minimum or maximum of an expression involving absolute values may be attained where the derivative does not exist, so even if you could find the derivative, and somehow find its roots, you still would not solve the problem.
Looking for zeros of the derivative is not a good way to minimize or maximize a function, outside of calculus exercises. Libraries like scipy.optimize implement many efficient numerical methods for this kind of problem (see the sketch below).
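Here is a hedged sketch of that scipy.optimize route (my own code; it restates l and Lambda from the question so the snippet is self-contained, and the node set q is made up): maximize the Lebesgue function numerically on [0, 1] instead of differentiating abs(...):

import numpy as np
from scipy import optimize

def l(q, i, x):
    poly = 1.0
    for j, p in enumerate(q):
        if j != i:
            poly *= (x - p) / (q[i] - p)
    return poly

def Lambda(q, x):
    return sum(abs(l(q, j, x)) for j in range(len(q)))

q = np.linspace(0.0, 1.0, 5)                       # example interpolation nodes
xs = np.linspace(0.0, 1.0, 201)                    # coarse grid to bracket the maximum
x0 = xs[np.argmax([Lambda(q, x) for x in xs])]
lo, hi = max(0.0, x0 - 0.01), min(1.0, x0 + 0.01)
res = optimize.minimize_scalar(lambda x: -Lambda(q, x), bounds=(lo, hi), method='bounded')
print(res.x, -res.fun)                             # location and value of the maximum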

How to remove the boundary effects arising due to zero padding in scipy/numpy fft?

I have written Python code to smooth a given signal using the Weierstrass transform, which is basically the convolution of a normalised Gaussian with the signal.
The code is as follows:
# Importing relevant libraries
from __future__ import division
from scipy.signal import fftconvolve
import numpy as np

def smooth_func(sig, x, t=0.002):
    N = len(x)
    x1 = x[-1]
    x0 = x[0]
    # defining a new array y which is symmetric around zero, to make the gaussian symmetric
    y = np.linspace(-(x1 - x0) / 2, (x1 - x0) / 2, N)
    # gaussian centered around zero
    gaus = np.exp(-y**2 / t)
    # using fftconvolve to speed up the convolution; gaus.sum() is the normalization constant
    return fftconvolve(sig, gaus / gaus.sum(), mode='same')
If I run this code on, say, a step function, it smooths the corner, but at the boundary it interprets another corner and smooths that too, as a result giving unwanted behaviour at the boundary. I explain this with the figure shown in the link below.
Boundary effects
This problem does not arise if we compute the convolution by direct integration. Hence the problem is not in the Weierstrass transform itself, but in scipy's fftconvolve function.
To understand why this problem arises we first need to understand how fftconvolve works in scipy.
The fftconvolve function basically uses the convolution theorem to speed up the computation.
In short it says:
convolution(int1,int2)=ifft(fft(int1)*fft(int2))
If we apply this theorem directly we don't get the desired result. To get the desired result we need to take the FFT over an array double the size of the larger of int1 and int2. But this leads to the undesired boundary effects: if the FFT length is greater than the size of the input, the input is zero padded before the FFT is taken. This zero padding is exactly what is responsible for the undesired boundary effects.
Can you suggest a way to remove this boundary effects?
I have tried to remove it by a simple trick. After smoothing the function I am comparing the value of the smoothed signal with the original signal near the boundaries, and if they don't match I replace the value of the smoothed function with the input signal at that point.
It is as follows:
i = 0
eps = 1e-3
while abs(smooth[i] - sig[i]) > eps:   # comparing the signals on the left boundary
    smooth[i] = sig[i]
    i = i + 1

j = -1
while abs(smooth[j] - sig[j]) > eps:   # comparing on the right boundary
    smooth[j] = sig[j]
    j = j - 1
There is a problem with this method: because of the epsilon threshold there are small jumps in the smoothed function, as shown below:
jumps in the smooth func
Can there be any changes made in the above method to solve this boundary problem?
Best approach is probably to use mode = 'valid':
The output consists only of those elements that do not
rely on the zero-padding.
Unless you can wrap your signal, or the signal being processed is an excerpt from a larger signal (in which case: process the full signal, then crop the region of interest), you are always going to have edge effects when doing convolution. You have to choose how you want to deal with them. Using mode='valid' just crops them off, which is a pretty good solution. If you know that the signal is always 'step-like', you could then extend the front and end of the processed signal as appropriate.
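A quick sketch of what mode='valid' does (my own example, with a made-up step signal and Gaussian kernel): the output contains only samples that never touch the zero padding, so it is shorter than the input:

import numpy as np
from scipy.signal import fftconvolve

sig = np.r_[np.zeros(50), np.ones(50)]            # step signal
kern = np.exp(-np.linspace(-3, 3, 21)**2)
kern /= kern.sum()
out = fftconvolve(sig, kern, mode='valid')
print(len(sig), len(out))                         # 100, 100 - 21 + 1 = 80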
What a symmetric filter kernel produces at the ends depends on what you assume the data is beyond the ends.
If you don't like the looks of the current result, which assumes zeros beyond both ends, try extending the data with another assumption, say a reflection of the data, or polynomial regression continuation. Extend the data on both ends by at least half the length of the filter kernel (except if your extension is zeros, which come for free with the existing zero-padding required for non-circular convolution). Then remove the added end extensions after filtering, and see if you like the looks of your assumption. If not, try another assumption. Or better yet, use actual data beyond the ends if you have such. A sketch of the reflection variant follows.
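Here is a hedged sketch of that reflection idea (my own code, restating the Gaussian setup from the question rather than being this answer's implementation): pad the signal with a mirrored copy at each end, filter, then crop the padding off:

import numpy as np
from scipy.signal import fftconvolve

def smooth_reflect(sig, x, t=0.002):
    N = len(x)
    pad = N // 2                                    # at least half the kernel length
    ext = np.concatenate([sig[pad:0:-1], sig, sig[-2:-pad - 2:-1]])  # mirrored ends
    span = x[-1] - x[0]
    y = np.linspace(-span / 2, span / 2, N)
    gaus = np.exp(-y**2 / t)                        # same Gaussian as in the question
    smoothed = fftconvolve(ext, gaus / gaus.sum(), mode='same')
    return smoothed[pad:pad + N]                    # drop the extensions again

# usage on a step signal
x = np.linspace(0, 1, 200)
sig = np.where(x > 0.5, 1.0, 0.0)
smooth = smooth_reflect(sig, x)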

Is it possible to invert an arbitrary lambda in Python?

I have been playing around with Python and math lately, and I ran into something I have yet to be able to figure out. Namely, is it possible, given an arbitrary lambda, to return the inverse of that lambda for mathematical operations? That is, invertLambda such that invertLambda(lambda x: (x+2))(2) = 0. The fact that lambdas are restricted to expressions gives me hope, but so far I have not been able to make it work. I understand that any result would have problems with functions that lose information, but I am willing to restrict users and myself to lossless functions if I have to.
Of course not: if the lambda is not an injective function, you cannot invert it. Example: you cannot invert a lambda mapping x to x*x, since the sign of the original x is lost.
Leaving injectivity aside, there are functions which are computationally very complex to invert. Consider, for example, restoring the original value from its md5 hash. (For a lambda calculating an md5 hash, the inverted function would have to break md5 in the cryptographic sense!)
Edit:
Indeed, we can theoretically make lambdas invertible if we restrict the expressions which can be used in them. For example, if the lambda is a linear function of one argument, we can easily invert it. If it's a polynomial of degree > 4, we have a problem finding an algebraically exact solution.
Of course, we could forgo an exact solution and just invert the function numerically. This is possible: any method of numerically solving the equation lambda(x) = value will do (the simplest being binary search).
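As a minimal sketch of that numerical route (my own example, assuming the lambda is continuous and strictly monotonic on a known bracket), scipy's Brent-style root finder can be used instead of a hand-rolled binary search:

from scipy.optimize import brentq

def invert(f, lo, hi):
    # numerically invert f on [lo, hi] by solving f(x) - y = 0 for each y
    return lambda y: brentq(lambda x: f(x) - y, lo, hi)

inv = invert(lambda x: x + 2, -100, 100)
print(inv(2))     # ~0.0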
I am a bit late, but I just published a Python package that does precisely this. You may want to borrow some ideas from it:
https://pypi.python.org/pypi/pynverse
It essentially follows this strategy:
Figure out if the function is increasing or decreasing. For this, two reference points ref1 and ref2 are needed:
In the case of a finite interval, the reference points are 1/4 and 3/4 of the way through the interval.
In an infinite interval, any two values really work.
If f(ref1) < f(ref2), the function is increasing, otherwise it is decreasing.
Figure out the image of the function in the interval.
If values are provided, then those are used.
In a closed interval just calculate f(a) and f(b), where a and b are the ends of the interval.
In an open interval, try to calculate f(a) and f(b); if this works, those are used, otherwise the image is assumed to be (-Inf, Inf).
Build a bounded function with the following conditions:
bounded_f(x):
    return -Inf if x below interval, and f is increasing.
    return +Inf if x below interval, and f is decreasing.
    return +Inf if x above interval, and f is increasing.
    return -Inf if x above interval, and f is decreasing.
    return f(x) otherwise
If the required number y0 for the inverse is outside the image, raise an exception.
Find roots for bounded_f(x) - y0, by minimizing (bounded_f(x) - y0)**2 using the Brent method, making sure that the minimization starts at a point inside the original interval by setting ref1 and ref2 as brackets. As soon as it goes outside the allowed interval, bounded_f returns infinity, forcing the algorithm to go back to searching inside the interval.
Check that the solutions are accurate and they meet f(x0)=y0 to some desired precision, raising a warning otherwise.
Of course, as Vlad pointed out, the function has to be invertible for the inverse to exist, and also continuous in the domain for this to work.
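For completeness, a short usage sketch of the linked package, based on its documented inversefunc helper (treat the exact call as an assumption and check the package docs):

from pynverse import inversefunc

cube = lambda x: x ** 3
invcube = inversefunc(cube)     # numerically inverts the lambda
print(invcube(27))              # ~3.0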
