Discrete Fourier Transform in Python

I need to use the discrete Fourier transform (DFT) in Python (and its inverse), and the results I obtain are a bit weird, so I tried it on a small example and I am not sure whether the mistake is in the math or in the code. Here is my small version of the code:
from __future__ import division
import numpy as np
from pylab import *
pi = np.pi
def f(x):
    return sin(x)
theta = np.arange(0,2*pi,2*pi/4)
k = np.arange(0,4,1)
x = f(theta)
y = np.fft.fft(x)
derivative = np.fft.ifft(1j*k*y)
print(derivative)
So what I do is sample sin at 4 points between 0 and 2pi and collect these values in a vector x. Then I take the DFT of x to get y. What I want is the derivative of sin at the chosen points, so I multiply y by the wave number k (which in this case would be 0, 1, 2, 3) and by the imaginary unit 1j (because each term in the Fourier sum has the form e^{ikx}, and differentiating brings down a factor of ik). In the end I take the inverse DFT of 1j*k*y, which should give me the derivative of sin. But what I get is this:
[ -1.00000000e+00 -6.12323400e-17j -6.12323400e-17 +2.00000000e+00j
1.00000000e+00 +1.83697020e-16j 6.12323400e-17 -2.00000000e+00j]
when I was supposed to get this
[1,0,-1,0]
ignoring round-off errors. Can someone tell me what I am doing wrong? Thank you!

The FFT of a real sequence has Hermitian symmetry, and any manipulation of the spectrum must preserve that symmetry if the inverse FFT is to yield a real result. Accordingly, the derivative operator in the frequency domain is defined over the lower half of the spectrum, and the upper half is constructed by symmetry. Note that for a spectrum of even size, the value at exactly N/2 is its own symmetric counterpart, and hence must have an imaginary part equal to 0. The following illustrates how to construct this derivative operator:
N = len(y)
if N%2:
    # odd N: wave numbers 0, 1, ..., (N-1)/2, -(N-1)/2, ..., -1
    derivative_operator = np.concatenate((np.arange(0,N/2,1),np.arange(-N//2+1,0,1)))*1j
else:
    # even N: the Nyquist bin at N/2 is set to 0 so the derivative stays real
    derivative_operator = np.concatenate((np.arange(0,N/2,1),[0],np.arange(-N/2+1,0,1)))*1j
You'd use this derivative_operator in the frequency domain as follows:
derivative = np.fft.ifft(derivative_operator*y)
In your sample case you should then get the following result:
[ 1.00000000e+00+0.j 6.12323400e-17+0.j
-1.00000000e+00+0.j -6.12323400e-17+0.j]
which is within roundoff errors of your expected [1,0,-1,0].
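Equivalently (a sketch of my own, not part of the answer above), np.fft.fftfreq can produce the signed wave numbers directly, so the concatenation does not have to be written by hand; only the Nyquist bin needs zeroing for even N:
import numpy as np

x = np.sin(np.arange(0, 2*np.pi, 2*np.pi/4))  # the same 4 samples as in the question
y = np.fft.fft(x)

N = len(y)
k = np.fft.fftfreq(N, d=1.0/N)   # signed wave numbers, here array([ 0.,  1., -2., -1.])
if N % 2 == 0:
    k[N//2] = 0                  # zero the Nyquist bin so the derivative stays real

derivative = np.fft.ifft(1j*k*y)
print(derivative.real)           # ~[ 1.  0. -1.  0.]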

Related

Eigenanalysis of complex hermitian matrix: different phase angles for EIG and EIGH

I understand that eigenvectors are only defined up to a multiplicative constant. As far as I see all numpy algorithms (e.g. linalg.eig, linalg.eigh, linalg.svd) yield identical eigenvectors for real matrices, so apparently they use the same normalization. In the case of a complex matrix, however, the algorithms yield different results.
That is, the eigenvectors are the same up to a (complex) constant z. After some experimenting with eig and eigh I realised that eigh always sets the phase angle (defined as arctan(imaginary part/real part)) to 0 for the first component of each eigenvector, whereas eig seems to start with some (arbitrary?) non-zero phase angle.
Q: Is there a way to normalize the eigenvectors from eigh in the way eig is doing it (that is not to force phase angle = 0)?
Example
I have a complex hermitian matrix G for which I want to calculate the eigenvectors using the two following algorithms:
1. numpy.linalg.eig for a real/complex square matrix
2. numpy.linalg.eigh for a real symmetric/complex hermitian matrix (a special case of 1.)
Check that G is hermitian
# check if a matrix is hermitian
def isHermitian(a, rtol=1e-05, atol=1e-08):
    return np.allclose(a, a.conjugate().T, rtol=rtol, atol=atol)
print('G is hermitian:', isHermitian(G))
Out:
G is hermitian: True
Perform eigenanalysis
# eigenvectors from EIG()
l1,u1 = np.linalg.eig(G)
idx = np.argsort(l1)[::-1]
l1,u1 = l1[idx].real,u1[:,idx]
# eigenvectors from EIGH()
l2,u2 = np.linalg.eigh(G)
idx = np.argsort(l2)[::-1]
l2,u2 = l2[idx],u2[:,idx]
Check eigenvalues
print('Eigenvalues')
print('eig\t:',l1[:3])
print('eigh\t:',l2[:3])
Out:
Eigenvalues
eig : [2.55621629e+03 3.48520440e+00 3.16452447e-02]
eigh : [2.55621629e+03 3.48520440e+00 3.16452447e-02]
Both methods yield the same eigenvalues.
Check eigenvectors
Now look at the eigenvectors (e.g. the 3rd eigenvector), which differ by a constant factor z.
multFactors = u1[:,2]/u2[:,2]
# exact float equality is fragile; use a tolerance to check that all factors agree
if np.allclose(multFactors, multFactors[0]):
    print("All multiplication factors are same:", multFactors[0])
else:
    print("Multiplication factors are different.")
Out:
All multiplication factors are same: (-0.8916113627685007+0.45280147727156245j)
Check phase angle
Now check the phase angle for the first component of the 3rd eigenvector:
print('Phase angle (in PI) for first point:')
print('Eig\t:',np.arctan2(u1[0,2].imag,u1[0,2].real)/np.pi)
print('Eigh\t:',np.arctan2(u2[0,2].imag,u2[0,2].real)/np.pi)
Out:
Phase angle (in PI) for first point:
Eig : 0.8504246311627189
Eigh : 0.0
Code to reproduce figure
import matplotlib.pyplot as plt
from matplotlib import gridspec

num = 2
fig = plt.figure()
gs = gridspec.GridSpec(2, 3)
ax0 = plt.subplot(gs[0,0])
ax1 = plt.subplot(gs[1,0])
ax2 = plt.subplot(gs[0,1:])
ax3 = plt.subplot(gs[1,1:])
ax2r= ax2.twinx()
ax3r= ax3.twinx()
ax0.imshow(G.real,vmin=-30,vmax=30,cmap='RdGy')
ax1.imshow(G.imag,vmin=-30,vmax=30,cmap='RdGy')
ax2.plot(u1[:,num].real,label='eig')
ax2.plot((u2[:,num]).real,label='eigh')
ax3.plot(u1[:,num].imag,label='eig')
ax3.plot((u2[:,num]).imag,label='eigh')
for a in [ax0,ax1,ax2,ax3]:
    a.set_xticks([])
    a.set_yticks([])
ax0.set_title('Re(G)')
ax1.set_title('Im(G)')
ax2.set_title('Re('+str(num+1)+'. Eigenvector)')
ax3.set_title('Im('+str(num+1)+'. Eigenvector)')
ax2.legend(loc=0)
ax3.legend(loc=0)
fig.subplots_adjust(wspace=0, hspace=.2,top=.9)
fig.suptitle('Eigenanalysis of Hermitian Matrix G',size=16)
plt.show()
As you say, the eigenvalue problem only fixes the eigenvectors up to a scalar x: multiplying an eigenvector v by x does not change its status as an eigenvector.
There is an "obvious" way to normalize the vectors (according to the Euclidean inner product np.vdot(v1, v1)), but this only fixes the modulus of the scalar, which can still be complex.
Fixing the angle or "phase" is kind of arbitrary without further context. I tried out eigh() and indeed it just makes the first entry of the vector real (with an apparently random sign!?).
eig() instead chooses to make real the vector entry with the largest real part. For example, here is what I get for a random Hermitian matrix:
import numpy as np
import numpy.linalg as la

n = 10
X = np.random.rand(n, n) + 1j*np.random.rand(n, n)  # X was not shown in the original; any random complex matrix will do
H = 0.5*(X + X.conj().T)  # make it Hermitian
np.max(la.eig(H)[1], axis=0)
# returns
array([0.57590624+0.j, 0.42672485+0.j, 0.51974879+0.j, 0.54500475+0.j,
0.4644593 +0.j, 0.53492448+0.j, 0.44080532+0.j, 0.50544424+0.j,
0.48589402+0.j, 0.43431733+0.j])
This is arguably more sensible, as just picking the first entry, like eigh() does, is not very robust if the first entry happens to be very small. Picking the max value avoids this. I am not sure if eig() also fixes the sign (a random matrix is not a very good test case for this as it would be very unusual for all entries in an eigenvector to have negative real parts, which is the only case in which an unfixed sign would show up).
In any case, I would not rely on the eigensolver using any particular way of fixing phases. It's not documented and so could, in principle, change in the future. Instead, fix the phases yourself, perhaps the same way eig() does it now.
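For instance, here is a minimal sketch of one way to do it (my own construction, not a documented numpy convention): rotate each eigenvector so that its largest-magnitude entry becomes real and positive.
import numpy as np

def fix_phases(vectors):
    # columns of `vectors` are eigenvectors, e.g. from np.linalg.eigh
    fixed = np.array(vectors, dtype=complex)
    for j in range(fixed.shape[1]):
        v = fixed[:, j]
        anchor = v[np.argmax(np.abs(v))]        # the entry that defines the phase
        fixed[:, j] = v * (anchor.conj()/np.abs(anchor))
    return fixed
Applying the same convention to the output of both eig() and eigh() should make the eigenvectors agree up to roundoff (barring ties in the entry magnitudes).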
In my experience (and there are many questions here to back this up), you NEVER want to use eig when eigh is an option - eig is very slow and very unstable. The relevance of this is that I believe your question is backward - you want to normalize the eigenvectors of eig to be like those of eigh, and this you know how to do.

Generating correlated random potential using fast Fourier transform

I would like to generate a random potential in 1D or 2D space with a specified autocorrelation function, and according to some mathematical derivations, including the Wiener-Khinchin theorem and properties of the Fourier transform, it turns out that this can be done using the following equation:
V(x) = F^{-1}[ sqrt(F[C](k)) e^{2 pi i phi(k)} ](x)
where phi(k) is uniformly distributed in the interval [0, 1). This function satisfies phi(-k) = -phi(k), which ensures that the generated potential is always real.
The autocorrelation function should not affect what I am doing here, and I take a simple Gaussian, C(x) = V0^2 exp(-x^2/rho^2).
The choice of the phase term and the condition on phi(k) are based on the following properties:
1. The phase term must have a modulus of 1 (by the Wiener-Khinchin theorem, i.e. the Fourier transform of the autocorrelation of a function equals the squared modulus of the Fourier transform of that function);
2. The Fourier transform of a real function must satisfy F(-k) = conj(F(k)) (by directly inspecting the definition of the Fourier transform in integral form);
3. Both the generated potential and the autocorrelation are real.
By combining these three properties, this term can only take the form as stated above.
For the relevant mathematics, you may refer to p.16 of the following pdf:
https://d-nb.info/1007346671/34
I randomly generated a numpy array using a uniform distribution and concatenated the negative of the array with the original array, so that it satisfies the condition on phi(k) stated above. Then I performed the numpy (inverse) fast Fourier transform.
I have tried both 1D and 2D cases, and only the 1D case is shown below.
import numpy as np
from numpy.fft import fft, ifft
import matplotlib.pyplot as plt
## The Gaussian autocorrelation function
def c(x, V0, rho):
    return V0**2 * np.exp(-x**2/rho**2)
x_min, x_max, interval_x = -10, 10, 10000
x = np.linspace(x_min, x_max, interval_x, endpoint=False)
V0 = 1
## the correlation length
rho = 1
## (Uniformly) randomly generated array for k>0
phi1 = np.random.rand(interval_x//2)
phi = np.concatenate((-1*phi1[::-1], phi1))
phase = np.exp(2j*np.pi*phi)
C = c(x, V0, rho)
V = ifft(np.power(fft(C), 0.5)*phase)
plt.plot(x, V.real)
plt.plot(x, V.imag)
plt.show()
(The resulting plot of the real and imaginary parts of V is omitted here.)
However, the generated potential turns out to be complex, and the imaginary parts are of the same order of magnitude as the real parts, which is not expected. I have checked the math many times, but I couldn't spot any problems. So I am wondering whether it is an implementation problem, for example whether the data points are dense enough for the fast Fourier transform, etc.
You have a few misunderstandings about how fft (more correctly, the DFT) operates.
First, note that the DFT assumes that the samples of the sequence are indexed 0, 1, ..., N-1, where N is the number of samples. Instead, you generate a sequence corresponding to indices -10000, ..., 10000. Second, note that the DFT of a real sequence has real values at the "frequencies" corresponding to 0 and N/2. You also seem not to take this into account.
I won't go into further details as this is out of the scope of this stackexchange site.
Just for a sanity check, the code below generates a sequence that has the properties expected of the DFT (FFT) of a real-valued sequence:
1. conjugate symmetry of positive and negative frequencies,
2. real-valued elements corresponding to frequencies 0 and N/2,
3. a sequence assumed to correspond to indices 0 to N-1.
As you can see, the ifft of this sequence indeed generates a real-valued sequence:
import numpy as np
import matplotlib.pyplot as plt
from scipy.fftpack import ifft
N = 32 # number of samples
n_range = np.arange(N) # indices over which the sequence is defined
n_range_positive = np.arange(int(N/2)+1) # the "positive frequencies" sample indices
n_range_negative = np.arange(int(N/2)+1, N) # the "negative frequencies" sample indices
# generate a complex-valued sequence with the properties expected for the DFT of a real-valued sequence
abs_FFT_positive = np.exp(-n_range_positive**2/100)
phase_FFT_positive = np.r_[0, np.random.uniform(0, 2*np.pi, int(N/2)-1), 0] # note last frequency has zero phase
FFT_positive = abs_FFT_positive * np.exp(1j * phase_FFT_positive)
FFT_negative = np.conj(np.flip(FFT_positive[1:-1]))
FFT = np.r_[FFT_positive, FFT_negative] # this is the final FFT sequence
# compute the IFFT of the above sequence
IFFT = ifft(FFT)
#plot the results
plt.plot(np.abs(FFT), '-o', label = 'FFT sequence (abs. value)')
plt.plot(np.real(IFFT), '-s', label = 'IFFT (real part)')
plt.plot(np.imag(IFFT), '-x', label = 'IFFT (imag. part)')
plt.legend()
More care needs to be taken when concatenating:
phi1 = np.random.rand(int(interval_x)//2-1)
phi = np.concatenate(([0], phi1, [0], -phi1[::-1]))
The first element is the offset (zero frequency mode). "Negative" frequencies come after the midpoint.
This gives me the expected result (plot omitted).
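As a quick sanity check (my own addition, not part of the answer), one can verify that the phase array built this way has the conjugate symmetry phase[N-k] == conj(phase[k]) required for the ifft input to be real:
import numpy as np

interval_x = 10000
phi1 = np.random.rand(interval_x//2 - 1)
phi = np.concatenate(([0], phi1, [0], -phi1[::-1]))
phase = np.exp(2j*np.pi*phi)

# phase[N-k] == conj(phase[k]) for k = 1, ..., N-1
print(np.allclose(phase[1:][::-1], np.conj(phase[1:])))  # True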

Manually recover the original function from numpy rfft

I have performed a numpy.fft.rfft on a function to obtain the Fourier coefficients. Since the docs do not seem to contain the exact formula used, I have been assuming a formula found in a textbook of mine:
S(x) = a_0/2 + SUM(real(a_n) * cos(nx) + imag(a_n) * sin(nx))
where imag(a_n) is the imaginary part of the n-th Fourier coefficient.
To translate this into python-speak, I have implemented the following:
def fourier(freqs, X):
    # input the fourier frequencies from np.fft.rfft, and arbitrary X
    const_term = np.repeat(np.real(freqs[0])/2, X.shape[0]).reshape(-1,1)
    # this is the "n" part of the inside of the trig terms
    trig_terms = np.tile(np.arange(1,len(freqs)), (X.shape[0],1))
    sin_terms = np.imag(freqs[1:])*np.sin(np.einsum('i,ij->ij', X, trig_terms))
    cos_terms = np.real(freqs[1:])*np.cos(np.einsum('i,ij->ij', X, trig_terms))
    return np.concatenate((const_term, sin_terms, cos_terms), axis=1)
This should give me an [X.shape[0], 2*freqs.shape[0] - 1] array, containing at entry i,j the i-th element of X evaluated at the j-th term of the Fourier decomposition (where the j-th term is a sin term for odd j).
By summing this array over the axis of Fourier terms, I should obtain the function evaluated at the i-th element of X:
import numpy as np
import matplotlib.pyplot as plt
X = np.linspace(-1,1,50)
y = X*(X-0.8)*(X+1)
reconstructed_y = np.sum(
    fourier(
        np.fft.rfft(y),
        X
    ),
    axis = 1
)
plt.plot(X,y)
plt.plot(X, reconstructed_y, c='r')
plt.show()
In any case, the red line should be basically on top of the blue line. Something has gone wrong either in my assumptions about what numpy.fft.rfft returns, or in my specific implementation, but I am having a hard time tracking down the bug. Can anyone shed some light on what I've done wrong here?
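For reference, here is a minimal reconstruction sketch that follows numpy's documented rfft convention, a_n = SUM_m y[m] e^{-2 pi i n m/N}, rather than the textbook formula above. Note the 1/N scaling, the factor of 2 on the non-edge terms, the minus sign on the sine term, and that the angles run over the sample indices m rather than the original X values:
import numpy as np

N = 50
X = np.linspace(-1, 1, N)
y = X*(X - 0.8)*(X + 1)
a = np.fft.rfft(y)

m = np.arange(N)
recon = np.full(N, a[0].real/N)  # n = 0 term: a_0/N, not a_0/2
for n in range(1, len(a)):
    # the Nyquist term (n = N/2 for even N) appears once; all other terms twice
    scale = 1.0 if (N % 2 == 0 and n == N//2) else 2.0
    recon += (scale/N)*(a[n].real*np.cos(2*np.pi*n*m/N)
                        - a[n].imag*np.sin(2*np.pi*n*m/N))

print(np.allclose(recon, y))  # True: the samples are recovered exactly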

Python Inverse Fourier Transform of Imaginary Odd Function

I am trying to understand how the fft and ifft functions work in Python. I made a simple example of an imaginary odd function and computed its inverse Fourier transform, in the hope of getting a real odd function (as should be the case). Below is my code:
import numpy as np
from numpy.fft import ifft, fftshift

v = np.array([-1,-2,0,2,1]) * 1j
t = [-2,-1,0,1,2]
V = ifft(fftshift(v))
Clearly, the function sampled by v is an odd, purely imaginary function, so when I compute the inverse Fourier transform and shift the result, I should get a real odd function. But this is not the case. What am I misunderstanding about the Fourier transform? Thanks!
You need ifftshift where you use fftshift, and fftshift at the very end. (The two shifts only coincide for even-length arrays; your v has odd length, so using fftshift where ifftshift belongs rotates the samples by one position.)
>>> from numpy.fft import ifft, fftshift, ifftshift
>>> w = fftshift(ifft(ifftshift(v)))
>>>
>>> np.allclose(w, w.real)
True
>>> np.allclose(w, -w[::-1])
True
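To see the difference on an odd-length array (a quick illustration of the point above):
>>> import numpy as np
>>> from numpy.fft import fftshift, ifftshift
>>> a = np.arange(5)
>>> fftshift(a)
array([3, 4, 0, 1, 2])
>>> ifftshift(a)
array([2, 3, 4, 0, 1])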

Translating an FFT function from Python 2.x to Python 3.x, and computing the IFFT from it

I have a Fast Fourier Transform function in Python written for version 2.x, and I want to make it work in Python 3.x, but I have some problems with xrange and list indices (as the interpreter told me). I also have no idea how to compute the inverse FFT from my FFT without using any non-standard libraries. The code is below. Thanks in advance...
from cmath import exp,pi

def FFT(X):
    n = len(X)
    w = exp(-2*pi*1j/n)
    if n > 1:
        X = FFT(X[::2]) + FFT(X[1::2])
        for k in xrange(n/2):
            xk = X[k]
            X[k] = xk + w**k*X[k+n/2]
            X[k+n/2] = xk - w**k*X[k+n/2]
    return X
UPD: I have completely reworked my FFT and built the IFFT from it, following your advice.
P.S. How do I close the post?
There are a couple of ways to convert your FFT into an IFFT. The easiest is to drop the minus sign inside the argument of the exp() call that defines w. Another is to take the complex conjugate of the FFT of the complex conjugate of the input.
If you don't scale your forward FFT, then common practice is to scale your IFFT computation by 1/N (the length), so that IFFT(FFT(x)) reproduces x. If you do scale your FFT by 1/N, then don't scale your IFFT computation. Or scale both by 1/sqrt(N).
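Putting those suggestions together, a sketch of a Python 3 port plus an IFFT built from it via conjugation (assuming, as the original recursion does, that the input length is a power of two):
from cmath import exp, pi

def FFT(X):
    # Python 3: xrange -> range, n/2 -> n//2 (integer division)
    n = len(X)
    w = exp(-2*pi*1j/n)
    if n > 1:
        X = FFT(X[::2]) + FFT(X[1::2])
        for k in range(n//2):
            xk = X[k]
            X[k] = xk + w**k*X[k+n//2]
            X[k+n//2] = xk - w**k*X[k+n//2]
    return X

def IFFT(X):
    # conjugate trick, with the 1/N scaling applied on the inverse
    n = len(X)
    return [x.conjugate()/n for x in FFT([x.conjugate() for x in X])]

print(IFFT(FFT([1, 2, 3, 4])))  # ~[1, 2, 3, 4] (up to roundoff)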
