Python manual FFT botched

I'm trying to reproduce the FFT functions in Python. I've seen a similar question, "Manual fft not giving me same results as fft", here, but I'm having trouble seeing whether I'm making the same error or a different one.
import numpy as np
import numpy.random as npr

N = 9   # 10 - 1
MC = 10

# Generate some data
data = complex(1, 0)*npr.uniform(size=(N, MC)) + complex(0, 1)*npr.uniform(size=(N, MC))
naive_fft = complex(1, 0)*np.zeros((N, MC))
for K in range(N):
    for m in range(N):
        phase = (2*np.pi*K*m)/float(N+1)
        naive_fft[K, :] = naive_fft[K, :] + data[m, :]*np.exp(complex(0, 1)*phase)
fft = np.fft.fft(data, axis=0)
ifft = np.fft.ifft(data, axis=0)
print('fft')
print(naive_fft - fft)
print('ifft')
print(naive_fft - ifft*(N+1.0))
Comparing my results to the numpy FFT, I can reproduce neither fft nor ifft (only the naive_fft[0,:] values seem to match fft[0,:]).

There are several things to mention. First of all, in Python we use 1j to represent the imaginary unit, not complex(0, 1). If you would like to compare your result to numpy, you have to check how numpy implements the FFT; see the Numpy FFT docs for details. You'll find that numpy follows the most common DFT definition, which uses a negative exponent in the forward transform. Furthermore, float(N+1) in your phase is simply wrong: it must read N.
All in all you have:
# ...
naive_fft = np.zeros((N, MC), dtype='complex')
for K in range(N):
    for m in range(N):
        phase = (-2*np.pi*K*m) / float(N)
        naive_fft[K] += data[m] * np.exp(phase*1j)
xfft = np.fft.fft(data, axis=0)
# ...
Test it with
>>> np.isclose(xfft, naive_fft).all()
True
The inverse transformation works analogously but with a positive exponent.
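For reference, here is a minimal sketch of the matching naive inverse (my own addition, assuming numpy's convention of putting the 1/N scaling in the inverse transform):
naive_ifft = np.zeros((N, MC), dtype='complex')
for K in range(N):
    for m in range(N):
        phase = (2*np.pi*K*m) / float(N)   # positive exponent for the inverse
        naive_ifft[K] += data[m] * np.exp(phase*1j)
naive_ifft /= N                            # numpy applies 1/N in ifft
With that, np.isclose(naive_ifft, np.fft.ifft(data, axis=0)).all() should also return True.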

Related

The Absolute Value of a Complex Number with Numpy

I have the following script in Python. I am calculating the Fourier transform of an array, and when I want to plot the result I use the absolute value of that calculation.
However, I do not know how the absolute value of a complex number is produced.
Does anyone know how it is calculated? I need to reproduce it in Java.
import numpy as np
import matplotlib.pyplot as plt
from numpy import fft
inp = [1,2,3,4]
res = fft.fft(inp)
print(res[1]) # returns (-2+2j) complex number
print(np.abs(res[1])) # returns 2.8284271247461903
np.abs gives the magnitude of a complex number, i.e. sqrt(a^2 + b^2); in your case that's sqrt((-2)^2 + 2^2) = sqrt(8).
https://numpy.org/doc/stable/reference/generated/numpy.absolute.html
sqrt(Re(z)**2 + Im(z)**2)
For z = a + ib this becomes:
sqrt(a*a + b*b)
It's just the Euclidean norm: sum the squares of the real part and the imaginary part (without the i) and take the square root.
https://www.varsitytutors.com/hotmath/hotmath_help/topics/absolute-value-complex-number
From the numpy.absolute(arr, out=None, ufunc 'absolute') documentation:
This mathematical function helps user to calculate absolute value of each element.
For a complex number a+ib, the absolute value is sqrt(a^2 + b^2).
For a complex value a+ib, you can consider using the Java Math static method hypot:
Math.hypot(a, b)
The method is an implementation of the Pythagorean theorem, sqrt(a*a + b*b) but additionally provides underflow and overflow protection.
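If you want to sanity-check the equivalence from Python first, numpy exposes hypot as well (a quick sketch of mine):
import numpy as np
z = -2 + 2j                       # the value of res[1] above
print(np.abs(z))                  # 2.8284271247461903
print(np.hypot(z.real, z.imag))   # same result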

Fastest algorithm for computing 3-D curl

I'm trying to write a section of code that computes the curl of a vector field numerically to second order with periodic boundary conditions. However, the algorithm I made is very slow and I'm wondering if anyone knows of any alternative algorithms.
To give more specific context: I'm using a 3xAxBxC numpy array as my vector field, where the first axis refers to the Cartesian direction (x, y, z) and A, B, C refer to the number of bins in that Cartesian direction (i.e. the resolution). So, for example, I might have a vector field F = np.zeros((3,64,64,64)) where Fx = F[0] is a 64x64x64 Cartesian lattice in its own right. So far, my solution has been to use the 3-point centered difference stencil to calculate the derivatives, with a nested loop iterating over all the dimensions and modular arithmetic enforcing the periodic boundary conditions (see below for an example). However, as my resolution (the size of A, B, C) increases, this begins to take a long time (upwards of 2 minutes, which adds up if I do this several hundred times in my simulation - this is just one small part of a larger algorithm). Does anyone know of an alternative method for doing this?
import numpy as np

F = np.array([np.ones([128,128,128]), 2*np.ones([128,128,128]),
              3*np.ones([128,128,128])])
VxF = np.array([np.zeros([128,128,128]), np.zeros([128,128,128]),
                np.zeros([128,128,128])])
for i in range(0,128):
    for j in range(0,128):
        for k in range(0,128):
            VxF[0][i,j,k] = 0.5*((F[2][i,(j+1)%128,k]-F[2][i,j-1,k])
                                 -(F[1][i,j,(k+1)%128]-F[1][i,j,k-1]))
            VxF[1][i,j,k] = 0.5*((F[0][i,j,(k+1)%128]-F[0][i,j,k-1])
                                 -(F[2][(i+1)%128,j,k]-F[2][i-1,j,k]))
            VxF[2][i,j,k] = 0.5*((F[1][(i+1)%128,j,k]-F[1][i-1,j,k])
                                 -(F[0][i,(j+1)%128,k]-F[0][i,j-1,k]))
Just to reiterate: I'm looking for an algorithm that computes the curl of a vector field array to second order, given periodic boundary conditions, faster than the one I have. Maybe nothing will do this, but I just want to check before I keep spending time running this algorithm. Thank you everyone in advance!
There may be better tools for this, but here is a trivial 200x speedup with numba:
import numpy as np
from numba import jit

def pure_python():
    F = np.array([np.ones([128,128,128]), 2*np.ones([128,128,128]),
                  3*np.ones([128,128,128])])
    VxF = np.array([np.zeros([128,128,128]), np.zeros([128,128,128]),
                    np.zeros([128,128,128])])
    for i in range(0,128):
        for j in range(0,128):
            for k in range(0,128):
                VxF[0][i,j,k] = 0.5*((F[2][i,(j+1)%128,k]-F[2][i,j-1,k])
                                     -(F[1][i,j,(k+1)%128]-F[1][i,j,k-1]))
                VxF[1][i,j,k] = 0.5*((F[0][i,j,(k+1)%128]-F[0][i,j,k-1])
                                     -(F[2][(i+1)%128,j,k]-F[2][i-1,j,k]))
                VxF[2][i,j,k] = 0.5*((F[1][(i+1)%128,j,k]-F[1][i-1,j,k])
                                     -(F[0][i,(j+1)%128,k]-F[0][i,j-1,k]))
    return VxF

@jit(fastmath=True)
def with_numba():
    F = np.array([np.ones([128,128,128]), 2*np.ones([128,128,128]),
                  3*np.ones([128,128,128])])
    VxF = np.array([np.zeros([128,128,128]), np.zeros([128,128,128]),
                    np.zeros([128,128,128])])
    for i in range(0,128):
        for j in range(0,128):
            for k in range(0,128):
                VxF[0][i,j,k] = 0.5*((F[2][i,(j+1)%128,k]-F[2][i,j-1,k])
                                     -(F[1][i,j,(k+1)%128]-F[1][i,j,k-1]))
                VxF[1][i,j,k] = 0.5*((F[0][i,j,(k+1)%128]-F[0][i,j,k-1])
                                     -(F[2][(i+1)%128,j,k]-F[2][i-1,j,k]))
                VxF[2][i,j,k] = 0.5*((F[1][(i+1)%128,j,k]-F[1][i-1,j,k])
                                     -(F[0][i,(j+1)%128,k]-F[0][i,j-1,k]))
    return VxF
The pure Python version takes 13 seconds on my machine, while the numba version takes 65 ms.
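As an aside (my own sketch, not part of the original answer), the same second-order stencil with periodic wrap-around can also be written without any Python loops using np.roll:
import numpy as np

def curl_periodic(F):
    # Centered difference along a given axis with periodic boundaries:
    # np.roll(A, -1, ax) - np.roll(A, 1, ax) gives A[i+1] - A[i-1] with wrap-around.
    d = lambda comp, ax: 0.5*(np.roll(F[comp], -1, axis=ax)
                              - np.roll(F[comp], 1, axis=ax))
    # Curl components: (dFz/dy - dFy/dz, dFx/dz - dFz/dx, dFy/dx - dFx/dy),
    # where F has shape (3, A, B, C) as in the question.
    return np.array([d(2, 1) - d(1, 2),
                     d(0, 2) - d(2, 0),
                     d(1, 0) - d(0, 1)])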

Translating an FFT function from Python 2.x to Python 3.x, and computing the IFFT from it

I have a Fast Fourier Transform function in Python for versions 2.x. I want to make it work in Python 3.x, but I have some problems with xrange and list identifiers (as my compiler said). I also have no idea how to compute the inverse FFT from my FFT without using any non-standard libraries. Code is below. Thanks in advance...
from cmath import exp, pi

def FFT(X):
    n = len(X)
    w = exp(-2*pi*1j/n)
    if n > 1:
        X = FFT(X[::2]) + FFT(X[1::2])
        for k in xrange(n/2):
            xk = X[k]
            X[k] = xk + w**k*X[k+n/2]
            X[k+n/2] = xk - w**k*X[k+n/2]
    return X
UPD: Totally reconstructed my FFT and constructed the IFFT thanks to your advice.
P.S. How do I close the post?
There are a couple ways to convert your FFT into an IFFT. The easiest is to get rid of the minus sign inside the parameter to your exp() function for w. The next is to take the complex conjugate of the FFT of the complex conjugate of the input.
If you don't scale your forward FFT, then common practice is to scale your IFFT computation by 1/N (the length), so that IFFT(FFT()) results in the same total sum magnitude. If you do scale your FFT by 1/N, then don't scale your IFFT computation. Or scale both by 1/sqrt(N).
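For concreteness, here is a minimal Python 3 sketch of both points (my own port, not the poster's final code): xrange becomes range, and the halving must use integer division (n // 2). The IFFT uses the conjugate trick with the 1/N scaling on the inverse:
from cmath import exp, pi

def FFT(X):
    # Recursive radix-2 Cooley-Tukey; len(X) must be a power of two.
    n = len(X)
    if n > 1:
        w = exp(-2*pi*1j/n)
        X = FFT(X[::2]) + FFT(X[1::2])
        for k in range(n//2):              # xrange -> range, n/2 -> n//2
            xk = X[k]
            X[k] = xk + w**k*X[k+n//2]
            X[k+n//2] = xk - w**k*X[k+n//2]
    return X

def IFFT(X):
    # conj(FFT(conj(x))) / N, as described above.
    n = len(X)
    return [v.conjugate()/n for v in FFT([x.conjugate() for x in X])]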

Sample a truncated integer power law in Python?

What function can I use in Python if I want to sample a truncated integer power law?
That is, given two parameters a and m, generate a random integer x in the range [1,m) that follows a distribution proportional to 1/x^a.
I've been searching around numpy.random, but I haven't found this distribution.
AFAIK, neither NumPy nor SciPy defines this distribution for you. However, with SciPy it is easy to define your own discrete distribution using scipy.stats.rv_discrete:
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

def truncated_power_law(a, m):
    x = np.arange(1, m+1, dtype='float')
    pmf = 1/x**a
    pmf /= pmf.sum()
    return stats.rv_discrete(values=(range(1, m+1), pmf))

a, m = 2, 10
d = truncated_power_law(a=a, m=m)
N = 10**4
sample = d.rvs(size=N)
plt.hist(sample, bins=np.arange(m)+0.5)
plt.show()
I don't use Python, so rather than risk syntax errors I'll try to describe the solution algorithmically. This is a brute-force discrete inversion. It should translate quite easily into Python. I'm assuming 0-based indexing for the array.
Setup:
Generate an array cdf of size m with cdf[0] = 1 as the first entry, cdf[i] = cdf[i-1] + 1/(i+1)**a for the remaining entries.
Scale all entries by dividing cdf[m-1] into each -- now they actually are CDF values.
Usage:
Generate your random values by generating a Uniform(0,1) and searching through cdf[] until you find an entry greater than your uniform. Return the index + 1 as your x-value. Repeat for as many x-values as you want.
For instance, with a,m = 2,10, I calculate the probabilities directly as:
[0.6452579827864142, 0.16131449569660355, 0.07169533142071269, 0.04032862392415089, 0.02581031931145657, 0.017923832855178172, 0.013168530260947229, 0.010082155981037722, 0.007966147935634743, 0.006452579827864143]
and the CDF is:
[0.6452579827864142, 0.8065724784830177, 0.8782678099037304, 0.9185964338278814, 0.944406753139338, 0.9623305859945162, 0.9754991162554634, 0.985581272236501, 0.9935474201721358, 1.0]
When generating, if I got a Uniform outcome of 0.90 I would return x=4 because 0.918... is the first CDF entry larger than my uniform.
If you're worried about speed you could build an alias table, but with a geometric decay the probability of early termination of a linear search through the array is quite high. With the given example, for instance, you'll terminate on the first peek almost 2/3 of the time.
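Since the answerer invited a translation, here is a minimal Python sketch of the algorithm above (my own code, with hypothetical helper names; searchsorted performs the "find the first CDF entry greater than the uniform" step in vectorized form):
import numpy as np

def make_cdf(a, m):
    # Setup: cumulative sums of 1/(i+1)**a, scaled by the last entry.
    cdf = np.cumsum(1.0 / np.arange(1, m+1)**a)
    return cdf / cdf[-1]

def sample(cdf, size):
    # Usage: draw Uniform(0,1) values; index + 1 is the x-value.
    u = np.random.uniform(size=size)
    return np.searchsorted(cdf, u) + 1

cdf = make_cdf(a=2, m=10)
print(sample(cdf, size=5))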
Use numpy.random.zipf and just reject any samples greater than or equal to m
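A quick sketch of that rejection approach (my own code, not from the answer; numpy.random.zipf requires a > 1):
import numpy as np

def zipf_truncated(a, m, size):
    # Draw from the untruncated Zipf and keep only samples below m.
    out = np.empty(0, dtype=int)
    while len(out) < size:
        draws = np.random.zipf(a, size)
        out = np.concatenate([out, draws[draws < m]])
    return out[:size]

print(zipf_truncated(a=2, m=10, size=5))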

(numpy) Wrong amplitude(?) of FFT'd array?

I'm using numpy and matplotlib to analyze data output form my simulations. There is one (apparent) inconsistency that I can't find the roots of. It's the following:
I have a signal with a given energy a^2 ~ 1. When I use rfft to take the FFT and compute the energy in Fourier space, it comes out significantly larger. To avoid giving the details of my data etc., here is an example with a simple sine wave:
from pylab import *
xx = np.linspace(0., 2*pi, 128)
a = np.zeros(128)
for i in range(0, 128):
    a[i] = sin(xx[i])
aft = rfft(a)
print mean(abs(aft)**2), mean(a**2)
In principle both numbers should be the same (at least in the numerical sense), but this is what I get out of this code:
62.523081632 0.49609375
I tried to go through the numpy.fft documentation but could not find anything. A search here gave the following, but I was not able to understand the explanation there:
Big FFT amplitude difference between the existing (synthesized) signal and the filtered signal
What am I missing/ misunderstanding? Any help/ pointer in this regard would be greatly appreciated.
Thanks!
Henry is right on the non-normalization part, but there is a little more to it, because you are using rfft, not fft. The following is consistent with his answer:
>>> x = np.linspace(0, 2 * np.pi, 128)
>>> y = 1 - np.sin(x)
>>> fft = np.fft.fft(y)
>>> np.mean((fft * fft.conj()).real)
191.49999999999991
>>> np.mean(y**2)
1.4960937500000004
>>> fft = fft / np.sqrt(len(fft))
>>> np.mean((fft * fft.conj()).real)
1.4960937499999991
But if you now try the same with rfft, things don't quite work out:
>>> rfft = np.fft.rfft(y)
>>> np.mean((rfft * rfft.conj()).real)
314.58462009358772
>>> rfft /= np.sqrt(len(rfft))
>>> np.mean((rfft * rfft.conj()).real)
4.8397633860551954
>>> len(rfft)
65
The following does work properly, though:
>>> rfft = np.fft.rfft(y)
>>> rfft /= np.sqrt(len(y))
>>> (rfft[0] * rfft[0].conj() +
...  2 * np.sum(rfft[1:] * rfft[1:].conj())).real / len(y)
1.4960937873636722
When you use rfft, what you are getting is not properly the DFT of your data, but only its positive-frequency half, since the negative half is symmetric to it. To compute the mean, you need to count every value other than the DC component twice, which is what the last snippet does, after normalizing by sqrt(len(y)) rather than sqrt(len(rfft)).
In most FFT libraries, the various DFT flavours are not orthogonal. The numpy.fft library applies the necessary normalizations only during the inverse transform.
Consider the Wikipedia description of the DFT; the inverse DFT has the 1/N term that the DFT does not have (in which N is the length of the transform). To make an orthogonal version of the DFT, you need to scale the result of the un-normalised DFT by 1/sqrt(N). In this case, the transform is orthogonal (that is, if we define the orthogonal DFT as F, then the inverse DFT is the conjugate, or hermitian, transpose of F).
In your case, you can get the correct answer by simply scaling aft by 1.0/sqrt(len(a)) (note that N is found from the length of the signal; the real FFT just throws away about half the values, so it's the length of a that matters).
I suspect that the reason for leaving the normalization until the end is that in most situations, it doesn't matter and you therefore save the computational cost of doing the normalization twice. Indeed, the very quick FFTW library doesn't do any normalization in either direction, and leaves it entirely up to the user to deal with.
Edit: Just to be clear, the explanation above is not quite correct. The correct answer will not be arrived at with that simple scaling, as in your case the DC component will be added in twice, although 1.0/sqrt(len(a)) is still the correct scaling to produce the unitary transform.
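As a side note (an addition of mine, not from the answers above): recent numpy versions expose this unitary scaling directly through the norm keyword, so you can check the claim without scaling by hand:
import numpy as np
y = 1 - np.sin(np.linspace(0, 2*np.pi, 128))
fft = np.fft.fft(y, norm="ortho")        # applies the 1/sqrt(N) scaling
print(np.mean((fft * fft.conj()).real))  # ~1.496, matches np.mean(y**2)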
