I'm trying to pseudorandomly change the phase of an image in the Fourier domain while keeping the magnitude the same, to get a noisy image. Here's the code for that:
import numpy as np
import matplotlib.pyplot as plt
import cv2
img_orig = cv2.imread("Lenna.png", 0)
plt.imshow(img_orig, cmap="gray");
Original Image
f = np.fft.fft2(img_orig)
mag_orig, ang_orig = np.abs(f), np.arctan2(f.imag, f.real)  # magnitude and phase
np.random.seed(42)
ns = np.random.uniform(0, 6.28, size = f.shape)  # random phase offsets in [0, ~2*pi)
ang_noise = ang_orig+ns
img_noise = np.abs(np.fft.ifft2(mag_orig*np.exp(ang_noise*1j)))
plt.imshow(img_noise, cmap="gray");
Noisy Image
But when I try to reconstruct the original image by removing the noise the way I added it, I get a noisy version of the original image. Here's the code:
f_noise = np.fft.fft2(img_noise)
mag_noise, ang_noise = np.abs(f_noise), np.arctan2(f_noise.imag, f_noise.real)
ang_recover = ang_noise-ns  # subtract the same noise that was added
img_recover = np.abs(np.fft.ifft2(mag_noise*np.exp(ang_recover*1j)))
plt.imshow(img_recover, cmap="gray");
Reconstructed Image
Any idea why this is happening and how to fix it? I'd appreciate any help I can get. Thank you.
Add to your code, after the line
ns = np.random.uniform(0, 6.28, size = f.shape)
the following, which makes the phase symmetric:
ns = np.fft.fft2(ns)
ns = np.arctan2(ns.imag, ns.real)
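This works because the phase of the FFT of a real array is odd-symmetric, so adding it keeps the noisy spectrum conjugate-symmetric; the inverse transform is then real up to round-off, and taking np.abs() for plotting no longer destroys information. A quick hedged check, reusing f, mag_orig and ang_orig from the question:
import numpy as np
np.random.seed(42)
ns = np.random.uniform(0, 6.28, size=f.shape)
ns = np.fft.fft2(ns)
ns = np.arctan2(ns.imag, ns.real)  # odd-symmetric phase
img_noise_c = np.fft.ifft2(mag_orig * np.exp(1j * (ang_orig + ns)))
print(np.abs(img_noise_c.imag).max())  # ~0: imaginary part is round-off only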
After adding noise in Fourier space, your image in real space will be complex (i.e. it will have both a magnitude and a phase). In your case you are taking the absolute value, probably so that you can plot it, but in doing so you are removing this phase information and altering your image when you shouldn't.
In short, I think you need to remove the abs in this line:
img_noise = np.abs(np.fft.ifft2(mag_orig*np.exp(ang_noise*1j)))
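For completeness, a minimal sketch of the lossless round trip once the abs is removed (reusing the arrays from the question; np.abs is kept only for display):
# keep the complex image; use np.abs only when plotting
img_noise = np.fft.ifft2(mag_orig * np.exp(ang_noise * 1j))
plt.imshow(np.abs(img_noise), cmap="gray")

# reconstruction: fft2 of the complex image returns the noisy spectrum exactly
f_noise = np.fft.fft2(img_noise)
ang_recover = np.arctan2(f_noise.imag, f_noise.real) - ns
img_recover = np.fft.ifft2(np.abs(f_noise) * np.exp(ang_recover * 1j))
plt.imshow(img_recover.real, cmap="gray")  # matches img_orig up to round-off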
Related
I've just started to learn about the image frequency domain.
I have this function:
def fourier_transform(img):
    f = np.fft.fft2(img)
    fshift = np.fft.fftshift(f)
    magnitude_spectrum = 20*np.log(np.abs(fshift))
    return magnitude_spectrum
And I want to implement this function:
def inverse_fourier_transform(magnitude_spectrum):
    return img
But I don't know how.
My idea is to use magnitude_spectrum to get the original img.
How can I do it?
You are losing the phases here: np.abs(fshift).
np.abs returns only the magnitude of your complex data and discards the phase. You can separate the two components of each coefficient with:
real = fshift.real
imag = fshift.imag
(these are the real and imaginary parts; the actual amplitudes and phases are np.abs(fshift) and np.angle(fshift)). In theory, you could work on the amplitudes and join them later together with the phases and reverse the FFT by np.fft.ifft2.
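For instance, a small sketch using np.abs and np.angle to separate true amplitudes and phases (img here stands for any 2D grayscale array):
import numpy as np

# img: any 2D grayscale array
fshift = np.fft.fftshift(np.fft.fft2(img))
amp = np.abs(fshift)    # amplitude spectrum
ph = np.angle(fshift)   # phase spectrum
# recombine and invert; recovers img up to round-off
img_back = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * ph))).real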
EDIT:
You could try this approach:
import numpy as np
import matplotlib.pyplot as plt
# single-channel image: random data here, or load the first
# channel of your own colour image instead
img = np.random.random((100, 100))
# img = plt.imread(r'path/to/color/img.jpg')[:,:,0]
# should be only width and height
print(img.shape)
# do the 2D fourier transform
fft_img = np.fft.fft2(img)
# shift FFT to the center
fft_img_shift = np.fft.fftshift(fft_img)
# extract real and imaginary parts (note: not amplitudes and phases)
real = fft_img_shift.real
phases = fft_img_shift.imag
# modify real part, put your modification here
real_mod = real/3
# create an empty complex array with the shape of the input image
fft_img_shift_mod = np.empty(real.shape, dtype=complex)
# insert the real and imaginary parts into the new array
fft_img_shift_mod.real = real_mod
fft_img_shift_mod.imag = phases
# reverse shift
fft_img_mod = np.fft.ifftshift(fft_img_shift_mod)
# reverse the 2D fourier transform
img_mod = np.fft.ifft2(fft_img_mod)
# np.abs gives the magnitude of each complex value;
# img_mod.real would give only the real part, which is nearly
# the same here since the imaginary residue is tiny
img_mod = np.abs(img_mod)
# show differences
plt.subplot(121)
plt.imshow(img, cmap='gray')
plt.subplot(122)
plt.imshow(img_mod, cmap='gray')
plt.show()
You cannot recover the exact original image without the phase information, so you cannot use only the magnitude of the fft2.
To recover the image from the fft2, you just need to call numpy.fft.ifft2. See the code below:
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift
# img: a 2D grayscale image array
# do the 2D fourier transform
fft_img = fftshift(fft2(img))
# reverse the 2D fourier transform
freq_filt_img = ifft2(ifftshift(fft_img))
freq_filt_img = np.abs(freq_filt_img)
freq_filt_img = freq_filt_img.astype(np.uint8)
Note that calling fftshift and ifftshift is not necessary if you just want to recover the original image directly, but I added them in case there is some plotting to be done in the middle or some other operation that requires centering the zero frequency.
The result of calling numpy.abs() or taking freq_filt_img.real (assuming positive values for each pixel) should be the same, because the imaginary part of the ifft2 output should be vanishingly small. Of course, numpy.abs() is O(n) while freq_filt_img.real is O(1), since .real is just a view.
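A quick way to convince yourself of that last claim (a sketch, reusing img and the imports from above):
# forward and inverse transform, keeping the complex result
roundtrip = ifft2(ifftshift(fftshift(fft2(img))))
print(np.abs(roundtrip.imag).max())  # on the order of 1e-12 for a float64 image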
I am trying to cross-correlate two images, and thus locate the template image on the first image, by finding the maximum correlation value.
I drew an image with some random shapes (first image), and cut out one of these shapes (template). Now, when I use scipy's correlate2d and locate the points in the correlation with maximum values, several points appear. To my knowledge, shouldn't there only be one point where the overlap is at a maximum?
The idea behind this exercise is to take some part of an image, and then correlate that to some previous images from a database. Then I should be able to locate this part on the older images based on the maximum value of correlation.
My code looks something like this:
from matplotlib import pyplot as plt
from PIL import Image
import scipy.signal as sp
img = Image.open('test.png').convert('L')
img = np.asarray(img)
temp = Image.open('test_temp.png').convert('L')
temp = np.asarray(temp)
corr = sp.correlate2d(img, temp, boundary='symm', mode='full')
plt.imshow(corr, cmap='hot')
plt.colorbar()
coordin = np.where(corr == np.max(corr))  # finds all coordinates with maximum correlation
listOfCoordinates = list(zip(coordin[1], coordin[0]))
for i in range(len(listOfCoordinates)):  # plotting all those coordinates
    plt.plot(listOfCoordinates[i][0], listOfCoordinates[i][1], 'c*', markersize=5)
This yields the figure:
Cyan stars are points with max correlation value (255).
I expect there to be only one point in "corr" to have the max value of correlation, but several appear. I have tried to use different modes of correlating, but to no avail.
This is the test image I use when correlating.
This is the template, cut from the original image.
Can anyone give some insight to what I might be doing wrong here?
You are probably overflowing the numpy type uint8.
Try using:
img = np.asarray(img,dtype=np.float32)
temp = np.asarray(temp,dtype=np.float32)
Untested.
Applying
img = img - img.mean()
temp = temp - temp.mean()
before computing the 2D cross-correlation corr should give you the expected result.
Cleaning up the code, for a full example:
from imageio import imread
from matplotlib import pyplot as plt
import scipy.signal as sp
import numpy as np
img = imread('https://i.stack.imgur.com/JL2LW.png', pilmode='L')
temp = imread('https://i.stack.imgur.com/UIUzJ.png', pilmode='L')
corr = sp.correlate2d(img - img.mean(),
                      temp - temp.mean(),
                      boundary='symm',
                      mode='full')
# coordinates where there is a maximum correlation
max_coords = np.where(corr == np.max(corr))
plt.plot(max_coords[1], max_coords[0],'c*', markersize=5)
plt.imshow(corr, cmap='hot')
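If you then want the template's position in img rather than in the correlation map, note that mode='full' pads the output by the template size minus one; a hedged sketch of the conversion:
# peak location in the full correlation map
y_peak, x_peak = np.unravel_index(np.argmax(corr), corr.shape)
# top-left corner of the matched region in img (offset by template size - 1)
top_left = (y_peak - temp.shape[0] + 1, x_peak - temp.shape[1] + 1)
print(top_left)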
I've got a high-resolution healpix map (nside = 4096) that I want to smooth in disks of a given radius, say 10 arcmin.
Being very new to healpy, and having read the documentation, I found that one (not so good) way to do this is to perform a "cone search": for each pixel, find the pixels inside the disk around it, average them, and assign this new value to the pixel at the center. However, this is very time-consuming.
import numpy as np
import healpy as hp
kappa = hp.read_map("zs_1.0334.fits") #Reading my file
NSIDE = 4096
t = 0.00290888 #10 arcmin
new_array = []
n = len(kappa)
for i in range(n):
    a = hp.query_disc(NSIDE, hp.pix2vec(NSIDE, i), t)
    new_array.append(np.mean(kappa[a]))
I think the healpy.sphtfunc.smoothing function could be of some help, as it states that you can enter any custom beam window function, but I don't understand how this works at all...
Thanks a lot for your help !
As suggested, I can easily make use of the healpy.sphtfunc.smoothing function by specifying a custom (circular) beam window.
To compute the beam window, which was my problem, healpy.sphtfunc.beam2bl is very useful and simple in the case of a top-hat.
The appropriate l_max would roughly be 2*nside, but it can be smaller depending on the specific map. One could, for example, compute the angular power spectrum (the Cls) and check whether it falls off at a smaller l than l_max, which could save some more time.
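For instance, a quick sketch of that check, using the kappa map loaded in the question:
import healpy as hp
import matplotlib.pyplot as plt

cl = hp.anafast(kappa, lmax=2*4096)  # angular power spectrum up to 2*nside
plt.semilogy(cl)
plt.xlabel("l")
plt.ylabel("Cl")
plt.show()  # if the Cls die off well before 2*nside, a smaller lmax suffices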
Thanks a lot to everyone who helped in the comments section!
Since I spent a certain amount of time trying to figure out how the smoothing function works, here is a bit of code that allows you to do a top-hat smoothing.
Cheers,
import healpy as hp
import numpy as np
import matplotlib.pyplot as plt
def top_hat(b, radius):
    return np.where(abs(b) <= radius, 1, 0)

nside = 128
npix = hp.nside2npix(nside)

# create an empty map
tst_map = np.zeros(npix)

# put a source in the middle of the map with value = 100
pix = hp.ang2pix(nside, np.pi/2, 0)
tst_map[pix] = 100

# compute the window function in harmonic (spherical) space which will smooth the map
b = np.linspace(0, np.pi, 10000)
bw = top_hat(b, np.radians(45))  # top-hat function of radius 45 degrees
beam = hp.sphtfunc.beam2bl(bw, b, nside*3)

# smooth the map
tst_map_smoothed = hp.smoothing(tst_map, beam_window=beam)
hp.mollview(tst_map_smoothed)
plt.show()
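Applied to the original question (a sketch, assuming the kappa map and nside = 4096 from above), the same recipe with a 10-arcmin top-hat would look like:
radius = np.radians(10.0/60.0)  # 10 arcmin in radians
b = np.linspace(0, np.pi, 10000)
bw = top_hat(b, radius)  # top-hat beam profile
beam = hp.sphtfunc.beam2bl(bw, b, 2*4096)  # lmax = 2*nside
kappa_smoothed = hp.smoothing(kappa, beam_window=beam)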
I'm new at coding and this is my first post!
As a first serious task, I'm trying to implement a simple image drift correction routine in Python (so I don't need to rely on ImageJ plugins), using skimage features such as register_translation and fourier_shift.
Below you can find what I've done so far, but here are my main questions regarding the approach:
1. Is the shift correction well applied? One thing that is not clear to me, when we estimate the cross-correlation peak by an FFT to identify the relative shift, is how this approach distinguishes between 'artifact' image drift and real object movement (i.e. real pixel-intensity shift).
2. I measured the drift for every two consecutive images and corrected the time-lapse accordingly. Is there a better way to do it?
3. So far I think I managed to at least partially correct the drift in my movies, but the final output still shows a 1-pixel drift in a random direction, and my tiff movies look like they are 'flickering' (due to that pixel). Should I apply the drift correction in a different way? (See the sketch after the code below for 2 and 3.)
Looking forward to some insight, not only on my specific questions but on this topic in general.
# import the basics
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage.feature import register_translation
from scipy.ndimage import fourier_shift
from skimage import io
''' register translation estimates the cross-correlation peak by an FFT
i.e, identifies the relative shift between two similar-sized images
using cross-correlation in Fourier space '''
movie = mymovie  # mymovie: your (T, H, W) image stack, loaded elsewhere
shifts = []
corrected_shift_movie = []

for img in range(movie.shape[0] - 1):
    # shift of frame img+1 relative to the first frame
    shift, error, diffphase = register_translation(movie[0], movie[img + 1])
    # apply the shift in Fourier space and transform back
    img_corr = fourier_shift(np.fft.fftn(movie[img + 1]), shift)
    img_corr = np.fft.ifftn(img_corr)
    shifts.append(shift)
    corrected_shift_movie.append(img_corr.real)
# for plotting the xy shifts over time
shifts = np.array(shifts)
corrected_shift_movie = np.array(corrected_shift_movie)
x_drift = [shifts[i][0] for i in range(0,shifts.shape[0])]
y_drift = [shifts[i][1] for i in range(0,shifts.shape[0])]
plt.plot(x_drift, '--g', label='X drift')
plt.plot(y_drift, '--r', label='Y drift')
plt.legend()
# checking drift for the new corrected movie
movie = corrected_shift_movie
shifts_corr = []
for img in range(movie.shape[0] - 1):
    shift, error, diffphase = register_translation(movie[0], movie[img + 1])
    shifts_corr.append(shift)
shifts_corr = np.array(shifts_corr)
x_drift = [shifts_corr[i][0] for i in range(0,shifts_corr.shape[0])]
y_drift = [shifts_corr[i][1] for i in range(0,shifts_corr.shape[0])]
plt.plot(x_drift, '--g', label='X drift')
plt.plot(y_drift, '--r', label='Y drift')
plt.legend()
# saving the new corrected movie
import tifffile as tiff
movie_to_save = corrected_shift_movie
with tiff.TiffWriter('drift_correction.tif', bigtiff=True) as tif:
    for new_image in range(movie_to_save.shape[0]):
        tif.save(movie_to_save[new_image], compress=0)
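Regarding questions 2 and 3 above, here is a hedged sketch (one option, not the only way) that registers consecutive frames, accumulates the shifts, and passes upsample_factor to register_translation for subpixel precision, which should reduce the residual 1-pixel flicker; it assumes movie is the (T, H, W) stack from above:
# register consecutive frames, accumulate drift relative to frame 0
cumulative = np.zeros(2)
corrected = [movie[0]]
for t in range(movie.shape[0] - 1):
    # subpixel shift between frame t and frame t+1 (1/10-pixel precision)
    shift, error, diffphase = register_translation(movie[t], movie[t + 1],
                                                   upsample_factor=10)
    cumulative += shift  # total drift of frame t+1 relative to frame 0
    frame = np.fft.ifftn(fourier_shift(np.fft.fftn(movie[t + 1]), cumulative))
    corrected.append(frame.real)
corrected = np.array(corrected)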
So I have a large array (2048x2048), and I would like to do some element-wise operations that depend on where the elements are. I'm very confused about how to do this (I was told not to use for loops, and when I tried that my IDE froze because it was going really slow).
Onto the question:
h = aperatureimage
h[:,:] = 0
indices = np.where(aperatureimage>1)
for True in h:
    h[index] = np.exp(1j*k*z)*np.exp(1j*k*(x**2+y**2)/(2*z))/(1j*wave*z)
So I have an index, which is (I'm assuming here) essentially a 'cropped' version of my larger aperatureimage array. Note: aperatureimage is a grayscale image converted to an array; it has a shape or text on it, and I would like to find all the 'white' regions of the aperature and perform my operation on them.
How can I access the individual x/y values of the index so that I can perform my exponential operation? When I try index[:,None], the program spits out 'ValueError: broadcast dimensions too large'. I also get 'array is not broadcastable to correct shape'. Any help would be appreciated!
One more clarification: x and y are the only values I would like to change (essentially the points in my array where there is white; z, k, and whatever else are defined previously).
EDIT:
I'm not sure the code I posted above is correct; it returns two empty arrays. I also tried this:
index = (aperatureimage==1)
print len(index)
Actually, nothing I've done so far works correctly. I have a 2048x2048 image with a 128x128 white square in the middle of it. I would like to convert this image to an array, look through all the values, and determine the index values (x,y) where the array is not black (I only have white/black; a bilevel image didn't work for me). I would then like to take all the values (x,y) where the array is not 0 and multiply them by the h[index] value listed above.
I can post more information if necessary. If you can't tell, I'm stuck.
EDIT2: Here's some code that might help. I think I have the problem above solved (I can now access members of the array and perform operations on them). But for some reason the Fx values in my for loop never increase; it loops over Fy forever...
import sys, os
from scipy.signal import *
import numpy as np
from PIL import Image, ImageDraw

def createImage(aperature, type):
    imsize = aperature*8  # add 0 padding to make it nice
    middle = imsize//2  # the middle (physical 0) of our image will be imagesize/2
    im = Image.new("L", (imsize, imsize))  # make a grayscale image with imsize*imsize pixels
    draw = ImageDraw.Draw(im)  # create a new draw method
    box = ((middle-aperature//2, middle-aperature//2), (middle+aperature//2, middle+aperature//2))  # bounding box for aperature
    if type == 'Rectangle':
        draw.rectangle(box, fill='white')  # draw a rectangle in the box and color it white
    del draw
    return im, middle

def Diffraction(aperaturediameter = 1, type = 'Rectangle', z = 2000000, wave = .001):
    # Constants
    deltaF = 1/8.  # image will be 8mm wide
    z = 1/3.
    wave = 0.001
    k = 2*np.pi/wave

    # Now let's get to work
    aperature = aperaturediameter * 128  # aperaturediameter (in mm) to some pixels
    im, middle = createImage(aperature, type)  # create an image depending on type of aperature
    aperaturearray = np.array(im)  # turn image into numpy array

    # Fourier Transform of Aperature
    Ta = np.fft.fftshift(np.fft.fft2(aperaturearray))/(len(aperaturearray))

    # Transforming and calculating of Transfer Function Method
    H = aperaturearray.copy().astype(complex)  # copy image so H (transfer function) has the same dimensions; complex dtype so the exp(1j*...) assignment below keeps its imaginary part
    H[:,:] = 0  # set H to 0
    U = aperaturearray.copy().astype(complex)
    U[:,:] = 0
    index = np.nonzero(aperaturearray)  # find nonzero elements of aperaturearray
    H[index[0],index[1]] = np.exp(1j*k*z)*np.exp(-1j*k*wave*z*((index[0]-middle)**2+(index[1]-middle)**2))  # free-space transfer for ap array
    Utfm = abs(np.fft.fftshift(np.fft.ifft2(Ta*H)))  # compute intensity at distance z

    # Fourier Integral Method
    U[index[0],index[1]] = aperaturearray[index[0],index[1]] * np.exp(1j*k*((index[0]-middle)**2+(index[1]-middle)**2)/(2*z))
    Ufim = abs(np.fft.fftshift(np.fft.fft2(U))/len(U))

    # Save images
    fim = Image.fromarray(np.uint8(Ufim))
    fim.save("PATH\Fim.jpg")
    ftfm = Image.fromarray(np.uint8(Utfm))
    ftfm.save("PATH\FTFM.jpg")
    print("that may have worked...")
    return

if __name__ == '__main__':
    Diffraction()
You'll need numpy, scipy, and PIL to work with this code.
When I run this, it goes through the code, but there is no data in the saved images (everything is black). Now I have a real problem here, as I don't entirely understand the math I'm doing (this is for HW), and I don't have a firm grasp on Python.
U[index[0],index[1]] = aperaturearray[index[0],index[1]] * np.exp(1j*k*((index[0]-middle)**2+(index[1]-middle)**2)/(2*z))
Should that line work for performing elementwise calculations on my array?
Could you perhaps post a minimal, yet complete, example? One that we can copy/paste and run ourselves?
In the meantime, in the first two lines of your current example:
h = aperatureimage
h[:,:] = 0
you set both 'aperatureimage' and 'h' to 0. That's probably not what you intended. You might want to consider:
h = aperatureimage.copy()
This generates a copy of aperatureimage, while your code simply points h at the same array as aperatureimage, so changing one changes the other.
Be aware, copying very large arrays might cost you more memory than you would prefer.
What I think you are trying to do is this:
import numpy as np

N = 2048
M = 64
a = np.zeros((N, N))
a[N//2-M:N//2+M, N//2-M:N//2+M] = 1  # white square in the middle
x, y = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N))
b = a.copy()
indices = np.where(a > 0)
b[indices] = np.exp(x[indices]**2 + y[indices]**2)
Or something similar. This, in any case, sets some values in 'b' based on the x/y coordinates where 'a' is bigger than 0. Try visualizing it with imshow. Good luck!
Concerning the edit
You should normalize your output so it fits in an 8-bit integer. Currently, one of your arrays has a maximum value much larger than 255 and the other a maximum much smaller. Try this instead:
fim = Image.fromarray(np.uint8(255*Ufim/np.amax(Ufim)))
fim.save("PATH\Fim.jpg")
ftfm = Image.fromarray(np.uint8(255*Utfm/np.amax(Utfm)))
ftfm.save("PATH\FTFM.jpg")
Also consider np.zeros_like() instead of copying and clearing H and U.
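For instance (a sketch; dtype=complex so the arrays can hold the np.exp(1j*...) values assigned later):
H = np.zeros_like(aperaturearray, dtype=complex)
U = np.zeros_like(aperaturearray, dtype=complex)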
Finally, I personally very much like working with ipython when developing something like this. If you put the code from your Diffraction function at the top level of your script (in place of the if __name__ == '__main__' block), then you can access the variables directly from ipython. A quick command like np.amax(Utfm) would show you that there are indeed values != 0. imshow() is always nice for looking at matrices.