Python OpenCV image interpolation inaccuracy - python

I am familiar with using OpenCV through the Python interface, but while using the image interpolation facilities for a somewhat non-standard problem requiring a good deal of accuracy, I noticed some unexpected inaccuracy in the results. The code below illustrates my issue. Any ideas? Am I just trying to use the interpolator outside of its design accuracy?
import numpy as np
import cv2
import matplotlib.pyplot as plt
# Source gradient image from 0 to 255
src = np.atleast_2d(np.linspace(0, 255, 10))
# Set up to interpolate from first pixel value to last pixel value
map_x_32 = np.linspace(0,9,101)
map_x_32 = np.atleast_2d(map_x_32).astype('float32')
map_y_32 = map_x_32*0
# Interpolate using OpenCV
output = cv2.remap(src, map_x_32, map_y_32, cv2.INTER_LINEAR)
# Truth
output_truth = np.atleast_2d(np.linspace(0, 255, 101))
interp_error = output - output_truth
plt.plot(interp_error[0])

I have experienced the same inaccuracy. scipy.ndimage.map_coordinates is much more accurate, although in my case it was also about 5x slower.
You could use it in this case as:
import scipy.ndimage
# Interpolate using SciPy
xy = np.vstack((map_y_32[np.newaxis,:,:], map_x_32[np.newaxis,:,:]))
output_scipy = scipy.ndimage.map_coordinates(src, xy, order=1)
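To quantify the difference, a small follow-up (reusing the variables from the snippets above) can compare both results against output_truth:
# Maximum absolute interpolation error against the analytic ramp
print('OpenCV max abs error:', np.abs(output - output_truth).max())
print('SciPy max abs error:', np.abs(output_scipy - output_truth).max())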

Related

Why does scipy.signal.correlate2d fail to work in this example?

I am trying to cross-correlate two images, and thus locate the template image on the first image, by finding the maximum correlation value.
I drew an image with some random shapes (first image) and cut out one of these shapes (template). Now, when I use scipy's correlate2d and locate the points in the correlation with maximum values, several points appear. From my knowledge, shouldn't there only be one point where the overlap is at a maximum?
The idea behind this exercise is to take some part of an image, and then correlate that to some previous images from a database. Then I should be able to locate this part on the older images based on the maximum value of correlation.
My code looks something like this:
from matplotlib import pyplot as plt
from PIL import Image
import numpy as np
import scipy.signal as sp

img = Image.open('test.png').convert('L')
img = np.asarray(img)
temp = Image.open('test_temp.png').convert('L')
temp = np.asarray(temp)

corr = sp.correlate2d(img, temp, boundary='symm', mode='full')
plt.imshow(corr, cmap='hot')
plt.colorbar()

# Find all coordinates where the correlation is at its maximum
coordin = np.where(corr == np.max(corr))
listOfCoordinates = list(zip(coordin[1], coordin[0]))

# Plot all those coordinates
for i in range(len(listOfCoordinates)):
    plt.plot(listOfCoordinates[i][0], listOfCoordinates[i][1], 'c*', markersize=5)
This yields the figure:
Cyan stars are points with max correlation value (255).
I expect only one point in "corr" to have the maximum correlation value, but several appear. I have tried different correlation modes, but to no avail.
This is the test image I use when correlating.
This is the template, cut from the original image.
Can anyone give some insight to what I might be doing wrong here?
You are probably overflowing the numpy type uint8.
Try using:
img = np.asarray(img,dtype=np.float32)
temp = np.asarray(temp,dtype=np.float32)
Untested.
Applying
img = img - img.mean()
temp = temp - temp.mean()
before computing the 2D cross-correlation corr should give you the expected result.
Cleaning up the code, for a full example:
from imageio import imread
from matplotlib import pyplot as plt
import scipy.signal as sp
import numpy as np
img = imread('https://i.stack.imgur.com/JL2LW.png', pilmode='L')
temp = imread('https://i.stack.imgur.com/UIUzJ.png', pilmode='L')
corr = sp.correlate2d(img - img.mean(),
                      temp - temp.mean(),
                      boundary='symm',
                      mode='full')
# coordinates where there is a maximum correlation
max_coords = np.where(corr == np.max(corr))
plt.plot(max_coords[1], max_coords[0],'c*', markersize=5)
plt.imshow(corr, cmap='hot')
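If only the single best match is needed (rather than every pixel tied at the maximum), a small variation on the last step could use argmax; this is a sketch reusing corr from the example above:
# Row/column of the single highest correlation peak
peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
plt.plot(peak_x, peak_y, 'c*', markersize=5)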

incorrect estimate_normals with open3d?

I am trying to calculate the normals of a point cloud formed by three planes each aligned with an axis.
In MATLAB the function pcnormals gives me a coherent result, while when I try to do the same with estimate_normals in open3d the result is incorrect.
The code is here:
import numpy as np
from open3d import *
pcd = read_point_cloud("D:\Artificial.txt",format = 'xyz')
estimate_normals(pcd, search_param = KDTreeSearchParamKNN(knn = 25))
x = np.concatenate((np.asarray(pcd.points),np.asarray(pcd.normals)),axis=1)
np.savetxt("D:\ArtificialN_python.txt",x,delimiter=',')
I have also tried with different knn values and search_param settings, but the result is similar.
I enclose images of the clouds coloured according to the third component of the normals (red = horizontal, green = inclined), as calculated with MATLAB and Python.
matlab result:
python result:
Does anybody know what this might be due to?
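One thing that may be worth checking (a guess, not a confirmed diagnosis): PCA-based normal estimation only determines each normal up to sign, so without an orientation step the normals can flip arbitrarily from point to point, which would scramble a colouring based on the third component. A minimal sketch with the newer Open3D API (assuming Open3D >= 0.10, where estimate_normals and orient_normals_consistent_tangent_plane are methods of PointCloud) might look like:
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud(r"D:\Artificial.txt", format='xyz')
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=25))
# Propagate a consistent normal orientation across neighbouring points
pcd.orient_normals_consistent_tangent_plane(25)
x = np.concatenate((np.asarray(pcd.points), np.asarray(pcd.normals)), axis=1)
np.savetxt(r"D:\ArtificialN_python.txt", x, delimiter=',')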

How to properly correct for XY image drift using a python script?

I'm new at coding and this is my first post!
As a first serious task, I'm trying to implement a simple image drift correction routine in Python (so I do not need to rely on ImageJ plugins) using functions such as skimage's register_translation and scipy.ndimage's fourier_shift.
Below you can find what I've done so far,
but here are my main questions regarding the approach:
- Is the shift correction applied correctly?
- One thing that is not clear to me about estimating the cross-correlation peak by an FFT to identify the relative shift: how does this approach distinguish between 'artifact' image drift and real object movement (i.e. a real shift of pixel intensities)?
- I measured the drift for every two consecutive images and corrected the time-lapse accordingly. Is there a better way to do it? (One alternative is sketched after the code below.)
- So far I think I have managed to at least partially correct the drift in my movies, but the final output still shows a 1 pixel drift in a random direction, and my tiff movies look like they are 'flickering' (because of that pixel). Should I apply the drift correction in a different way?
Looking forward to some insight, not only on my specific questions but on this topic in general.
# import the basics
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage.feature import register_translation
from scipy.ndimage import fourier_shift
from skimage import io
''' register translation estimates the cross-correlation peak by an FFT
i.e, identifies the relative shift between two similar-sized images
using cross-correlation in Fourier space '''
movie = mymovie  # mymovie: the time-lapse stack to correct, shape (frames, y, x)
shifts = []
corrected_shift_movie = []
for img in range(0, movie.shape[0]):
    if img < movie.shape[0] - 1:
        # shift of frame img+1 relative to the first frame
        shift, error, diffphase = register_translation(movie[0], movie[img + 1])
        # apply the correcting shift in Fourier space
        img_corr = fourier_shift(np.fft.fftn(movie[img + 1]), shift)
        img_corr = np.fft.ifftn(img_corr)
        shifts.append(shift)
        corrected_shift_movie.append(img_corr.real)
# for plotting the xy shifts over time
shifts = np.array(shifts)
corrected_shift_movie = np.array(corrected_shift_movie)
x_drift = [shifts[i][0] for i in range(0,shifts.shape[0])]
y_drift = [shifts[i][1] for i in range(0,shifts.shape[0])]
plt.plot(x_drift, '--g' , label = ' X drift')
plt.plot(y_drift, '--r' , label = ' Y drift')
plt.legend()
# checking drift for the new corrected movie
movie = corrected_shift_movie
shifts_corr = []
for img in range(0, movie.shape[0]):
    if img < movie.shape[0] - 1:
        shift, error, diffphase = register_translation(movie[0], movie[img + 1])
        shifts_corr.append(shift)
shifts_corr = np.array(shifts_corr)
x_drift = [shifts_corr[i][0] for i in range(0,shifts_corr.shape[0])]
y_drift = [shifts_corr[i][1] for i in range(0,shifts_corr.shape[0])]
plt.plot(x_drift, '--g' , label = ' X drift')
plt.plot(y_drift, '--r' , label = ' Y drift')
plt.legend()
# saving the new corrected movie
import tifffile as tiff
movie_to_save = corrected_shift_movie
with tiff.TiffWriter('drift_correction.tif', bigtiff=True) as tif:
    for new_image in range(movie_to_save.shape[0]):
        tif.save(movie_to_save[new_image], compress=0)
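On the consecutive-frame question and the residual 1 pixel flicker: a minimal sketch of one alternative (not a tested solution) is to register each frame against the previous one with subpixel precision via register_translation's upsample_factor argument, accumulate the pairwise shifts, and apply the cumulative shift to every frame. The stack name mymovie and the value of upsample_factor below are assumptions:
import numpy as np
from skimage.feature import register_translation
from scipy.ndimage import fourier_shift

movie = mymovie  # assumed stack of shape (frames, y, x)

# Subpixel shift of each frame relative to the previous frame
pairwise = [register_translation(movie[i], movie[i + 1], upsample_factor=10)[0]
            for i in range(movie.shape[0] - 1)]

# Cumulative drift of every frame relative to frame 0
cumulative = np.vstack([[0.0, 0.0], np.cumsum(pairwise, axis=0)])

# Shift every frame back by its cumulative drift
corrected = np.array([np.fft.ifftn(fourier_shift(np.fft.fftn(frame), shift)).real
                      for frame, shift in zip(movie, cumulative)])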

How to produce the following images (gabor patches)

I am trying to create four gabor patches, very similar to those below.
I don't need them to be identical to the pictures below, but similar.
Despite a bit of tinkering, I have been unable to reproduce these images...
I believe they were created in MATLAB originally. I don't have access to the original MATLAB code.
I have the following code in python (2.7.10):
import numpy as np
from scipy.misc import toimage # One can also use matplotlib*
data = gabor_fn(sigma = ???, theta = 0, Lambda = ???, psi = ???, gamma = ???)
toimage(data).show()
*graphing a numpy array with matplotlib
gabor_fn, from here, is defined below:
import numpy
def gabor_fn(sigma, theta, Lambda, psi, gamma):
    sigma_x = sigma
    sigma_y = float(sigma) / gamma
    # Bounding box
    nstds = 3
    xmax = max(abs(nstds * sigma_x * numpy.cos(theta)), abs(nstds * sigma_y * numpy.sin(theta)))
    xmax = numpy.ceil(max(1, xmax))
    ymax = max(abs(nstds * sigma_x * numpy.sin(theta)), abs(nstds * sigma_y * numpy.cos(theta)))
    ymax = numpy.ceil(max(1, ymax))
    xmin = -xmax
    ymin = -ymax
    (x, y) = numpy.meshgrid(numpy.arange(xmin, xmax + 1), numpy.arange(ymin, ymax + 1))
    (y, x) = numpy.meshgrid(numpy.arange(ymin, ymax + 1), numpy.arange(xmin, xmax + 1))
    # Rotation
    x_theta = x * numpy.cos(theta) + y * numpy.sin(theta)
    y_theta = -x * numpy.sin(theta) + y * numpy.cos(theta)
    gb = numpy.exp(-.5 * (x_theta**2 / sigma_x**2 + y_theta**2 / sigma_y**2)) * numpy.cos(2 * numpy.pi / Lambda * x_theta + psi)
    return gb
As you may be able to tell, the only difference (I believe) between the images is contrast. So, gabor_fn would likely need to be altered to allow for this (unless I misunderstand one of the params)... I'm just not sure how.
UPDATE:
from math import pi
from matplotlib import pyplot as plt
data = gabor_fn(sigma=5.,theta=pi/2.,Lambda=12.5,psi=90,gamma=1.)
unit = #From left to right, unit was set to 1, 3, 7 and 9.
bound = 0.0009/unit
fig = plt.imshow(
    data,
    cmap='gray',
    interpolation='none',
    vmin=-bound,
    vmax=bound,
)
plt.axis('off')
The problem you are having is a visualization problem (although I think you are choosing too large parameters).
By default, matplotlib and scipy's toimage use bilinear (or trilinear) interpolation, depending on your matplotlib configuration. That's why your image looks so smooth: your pixel values are being interpolated, and you are not displaying the raw kernel you just calculated.
Try using matplotlib with no interpolation:
from matplotlib import pyplot as plt
plt.imshow(data, 'gray', interpolation='none')
plt.show()
For the following parameters:
data = gabor_fn(sigma=5.,theta=pi/2.,Lambda=25.,psi=90,gamma=1.)
You get this output:
If you reduce Lambda to 15, you get something like this:
Additionally, the sigma you choose changes the strength of the smoothing. Adding the parameters vmin=-1 and vmax=1 to imshow (similar to what @kazemakase suggested) will give you the desired contrast.
Check this guide for sensible values for (and ways to use) Gabor kernels:
http://scikit-image.org/docs/dev/auto_examples/plot_gabor.html
It seems like toimage scales the input data so that the min/max values are mapped to black/white.
I do not know what amplitudes to reasonably expect from gabor patches, but you should try something like this:
toimage(data, cmin=-1, cmax=1).show()
This tells toimage what range your data is in. You can try to play around with cmin and cmax, but make sure they are symmetric (i.e. cmin=-x, cmax=x) so that a value of 0 maps to grey.
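Putting both answers together, a small sketch using scikit-image's gabor_kernel (an assumption, not code from either answer; the frequency and the contrast values below are arbitrary) shows how the symmetric display range controls the apparent contrast of the same patch:
import numpy as np
import matplotlib.pyplot as plt
from skimage.filters import gabor_kernel

# Real part of a Gabor kernel; frequency is in cycles per pixel
kernel = np.real(gabor_kernel(frequency=0.1, theta=np.pi / 2, sigma_x=5, sigma_y=5))

fig, axes = plt.subplots(1, 4)
for ax, contrast in zip(axes, [1.0, 0.5, 0.25, 0.1]):
    # A wider symmetric range (vmin/vmax) makes the same patch look lower in contrast
    bound = kernel.max() / contrast
    ax.imshow(kernel, cmap='gray', interpolation='none', vmin=-bound, vmax=bound)
    ax.set_axis_off()
plt.show()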

How does zero-padding work for 2D arrays in scipy.fftpack?

I'm trying to improve the speed of a function that calculates the normalized cross-correlation between a search image and a template image by using the anfft module, which provides Python bindings for the FFTW C library and seems to be ~2-3x quicker than scipy.fftpack for my purposes.
When I take the FFT of my template, I need the result to be padded to the same size as my search image so that I can convolve them. Using scipy.fftpack.fftn I would just use the shape parameter to do padding/truncation, but anfft.fftn is more minimalistic and doesn't do any zero-padding itself.
When I try to do the zero-padding myself, I get a very different result from what I get using shape. This example uses just scipy.fftpack, but I have the same problem with anfft:
import numpy as np
from scipy.fftpack import fftn
from scipy.misc import lena
img = lena()
temp = img[240:281,240:281]
def procrustes(a, target, padval=0):
    # Forces an array to a target size by either padding it with a constant
    # or truncating it
    b = np.ones(target, a.dtype) * padval
    aind = [slice(None, None)] * a.ndim
    bind = [slice(None, None)] * a.ndim
    for dd in xrange(a.ndim):
        if a.shape[dd] > target[dd]:
            # truncate a symmetrically about its centre
            diff = (a.shape[dd] - b.shape[dd]) / 2.
            aind[dd] = slice(np.floor(diff), a.shape[dd] - np.ceil(diff))
        elif a.shape[dd] < target[dd]:
            # centre a inside the padded output b
            diff = (b.shape[dd] - a.shape[dd]) / 2.
            bind[dd] = slice(np.floor(diff), b.shape[dd] - np.ceil(diff))
    b[bind] = a[aind]
    return b
# using scipy.fftpack.fftn's shape parameter
F1 = fftn(temp,shape=img.shape)
# doing my own zero-padding
temp_padded = procrustes(temp,img.shape)
F2 = fftn(temp_padded)
# these results are quite different
np.allclose(F1,F2)
I suspect I'm probably making a very basic mistake, since I'm not overly familiar with the discrete Fourier transform.
Just do the inverse transform and you'll see that scipy does slightly different padding (only to the top and right edges):
from scipy.fftpack import ifftn
import matplotlib.pyplot as plt
plt.imshow(ifftn(fftn(procrustes(temp, img.shape))).real)
plt.imshow(ifftn(fftn(temp, shape=img.shape)).real)
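In other words, fftpack's shape argument seems to pad with zeros at the end of each axis rather than symmetrically around the centre (an observation from the plots above, not a documented guarantee). A quick sketch that pads the template the same way and checks whether it reproduces F1:
import numpy as np
from scipy.fftpack import fftn

# Pad only at the end of each axis, mimicking what fftn's shape argument appears to do
pad = [(0, img.shape[d] - temp.shape[d]) for d in range(temp.ndim)]
temp_end_padded = np.pad(temp, pad, mode='constant')
F3 = fftn(temp_end_padded)

print(np.allclose(fftn(temp, shape=img.shape), F3))  # expected True if the assumption holds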
