I have a 2D Gaussian whose center has been destroyed by pixel saturation. I need the center to be filled in, because a poorly filled-in center will confuse a neural network I'm trying to train. See below:
The scattered NaN values I can handle fairly easily, but the large cluster at the Gaussian's center I cannot.
I've tried various methods to correct this, but none of them fill in the Gaussian correctly.
Here are some other similar answers that I've tried:
Python Image Processing - How to remove certain contour and blend the value with surrounding pixels?
https://docs.astropy.org/en/stable/convolution/index.html
These work well for the small discrete NaNs scattered around the image, but don't adequately address the center cluster.
This is what I get with convolution infilling:
I've taken slices of the centers as well.
I do actually have a reference image that does not have NaNs. However, the scaling of the pixel values is not constant, so I've written a function that takes into account the different scaling of each pixel:
import numpy as np

def mult_mean_surround(s_arr, c_arr, coord):
    # Ratio of the damaged image to the reference image, taken over the 8 neighbours
    directions = np.array([[1,0],[-1,0],[0,1],[0,-1],[1,1],[1,-1],[-1,-1],[-1,1]])
    s = np.array([])
    for i in directions:
        try:
            if not np.isnan(s_arr[coord[0]+i[0], coord[1]+i[1]]):
                s = np.append(s, s_arr[coord[0]+i[0], coord[1]+i[1]]
                                 / c_arr[coord[0]+i[0], coord[1]+i[1]])
        except IndexError:
            pass
    # Fill the pixel from the reference image, scaled by the mean neighbour ratio
    if len(s) != 0:
        s_arr[coord[0], coord[1]] = c_arr[coord[0], coord[1]] * np.mean(s)
It copies the corresponding pixel value from the reference image and scales it by the mean ratio of the surrounding pixels.
Ideally, it would look something like this:
The center is brighter than the rim and it looks more like a Gaussian. However, this method is also substantially slower than the others, so I'm not sure how to get around either of my issues. I've tried boosting the speed with CuPy, with no luck, as shown here: Boosting algorithm with cupy
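For reference, here is a vectorised sketch of the same neighbour-ratio idea; this is my own rewrite (not the function above), it assumes the reference image has no zeros or NaNs, and a single pass only fills pixels that have at least one valid neighbour, so a large hole needs repeated passes:
import numpy as np

def mult_mean_surround_vec(s_arr, c_arr):
    # Per-pixel ratio of the damaged image to the reference; NaN where s_arr is NaN
    # (assumes c_arr contains no zeros or NaNs)
    ratio = s_arr / c_arr
    padded = np.pad(ratio, 1, mode="constant", constant_values=np.nan)
    h, w = ratio.shape
    # The 8 neighbour shifts, stacked, then a NaN-aware mean over them
    shifts = [padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    mean_ratio = np.nanmean(np.stack(shifts), axis=0)
    # Fill only the NaN pixels, scaled by the reference image
    return np.where(np.isnan(s_arr), c_arr * mean_ratio, s_arr)
This avoids the per-pixel Python loop entirely, which is usually where the time goes.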
If anyone has any helpful ideas, that would be great.
I am assuming that you are filling the 'hole' with only one Gaussian.
First make a mask of all the NaNs, i.e. NaN = 1, not NaN = 0.
You can do a neighbor-count check to remove all mask pixels with no neighbors, then use a clustering algorithm (like DBSCAN) to find the largest cluster of pixels.
Calculate the centroid, width (max x - min x), and height (max y - min y) of the resulting cluster.
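A minimal sketch of the masking and clustering steps; note that I'm substituting scipy's connected-component labelling for DBSCAN, which for a compact saturated region should find the same cluster:
import numpy as np
from scipy import ndimage

def largest_nan_cluster(img):
    mask = np.isnan(img)                                        # NaN = True, not NaN = False
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))  # 8-connected clusters
    if n == 0:
        return None
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                                # label 0 is the background
    biggest = sizes.argmax()
    ys, xs = np.nonzero(labels == biggest)
    centroid = (xs.mean(), ys.mean())                           # (x, y)
    width, height = xs.max() - xs.min(), ys.max() - ys.min()
    return centroid, width, height, labels == biggest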
You can then use the following code:
import math

def gaussian_fit(query_x, query_y,
                 centroid_x, centroid_y,
                 filter_w, filter_h,
                 sigma_at_edge=1.0):
    # Offsets from the centroid, normalised by the cluster size and sigma_at_edge
    x_coord = (query_x - centroid_x) * 2 / (filter_w * sigma_at_edge)
    y_coord = (query_y - centroid_y) * 2 / (filter_h * sigma_at_edge)
    return math.exp(-1.0 * (x_coord ** 2 + y_coord ** 2))
You may need to rescale the result by some constant.
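For illustration, a hypothetical fill step that evaluates gaussian_fit at each pixel of the largest cluster, taking the (x, y) centroid, mask, and width/height from the clustering sketch above; the rescaling by the brightest unsaturated pixel is my guess, so adjust the constant as suggested:
import numpy as np

def fill_cluster(img, cluster_mask, centroid, width, height):
    scale = np.nanmax(img)              # crude amplitude estimate (a guess; adjust as needed)
    ys, xs = np.nonzero(cluster_mask)
    for y, x in zip(ys, xs):
        img[y, x] = scale * gaussian_fit(x, y,
                                         centroid[0], centroid[1],
                                         max(width, 1), max(height, 1))
    return img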
I would like to compute the average luminescence value vs. distance to the center of an image. The approach I am thinking about is to:
compute the distance between pixels in image and image center
group pixels with same distance
compute the average value of pixels for each group
plot graph of distance vs average intensity
To compute the first step I use this code:
import math
import numpy as np

h, w = gray.shape[:2]   # gray is the input grayscale image
dist_img = np.zeros(gray.shape, dtype=np.uint8)
for y in range(0, h):
    for x in range(0, w):
        cy = gray.shape[0] / 2
        cx = gray.shape[1] / 2
        dist = math.sqrt(((x - cx) ** 2) + ((y - cy) ** 2))
        dist_img[y, x] = dist
Unfortunately it gives a different result from the one I compute here:
distance = math.sqrt(((1 - gray.shape[0]/2)**2 )+((1 - gray.shape[1]/2 )**2))
When I test it for pixel (1,1) I get 20 from the first piece of code and 3605 from the second.
I would appreciate suggestions on how to correct the loop and hints on how to continue with the other steps. Or maybe there is another way to achieve what I would like to do.
You are setting up dist_img with an np.uint8 dtype. This 8-bit unsigned integer can only hold values between 0 and 255, so 3605 cannot be represented properly. Use a higher bit depth for your distance image's dtype, such as np.uint32.
distance = math.sqrt(((1 - gray.shape[0]/2)**2 )+((1 - gray.shape[1]/2 )**2))
Careful: gray.shape will give you (height, width), i.e. (y, x). The other code correctly assigns gray.shape[0]/2 to the y center; this one mixes it up and uses the height for the x coordinate.
Your algorithm seems good enough; I would suggest you stick with it. You can achieve something similar to the first two steps by converting the image to polar space (e.g. with OpenCV's linearPolar or warpPolar), but that may be harder to debug.
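As a concrete illustration, the first three steps can be vectorised in a few lines of NumPy (a sketch; gray is assumed to be a 2D grayscale array):
import numpy as np

def radial_profile(gray):
    h, w = gray.shape
    cy, cx = h / 2, w / 2
    y, x = np.indices((h, w))
    # Integer distance of every pixel from the centre (steps 1 and 2)
    r = np.hypot(x - cx, y - cy).astype(int)
    # Average intensity per distance bin (step 3)
    sums = np.bincount(r.ravel(), weights=gray.ravel().astype(float))
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)   # guard against empty bins
Plotting the returned array against its index gives step 4.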
From the Tutorial: https://programtalk.com/vs2/?source=python/8176/opencv-python-blueprints/chapter4/scene3D.py
I don't understand why they first undistort the images
# undistort the images
self.img1 = cv2.undistort(self.img1, self.K, self.d)
self.img2 = cv2.undistort(self.img2, self.K, self.d)
and then compute the essential matrix:
def _find_fundamental_matrix(self):
    self.F, self.Fmask = cv2.findFundamentalMat(self.match_pts1,
                                                self.match_pts2,
                                                cv2.FM_RANSAC, 0.1, 0.99)

def _find_essential_matrix(self):
    self.E = self.K.T.dot(self.F).dot(self.K)
and also normalize the coordinates:
first_inliers = []
second_inliers = []
for i in range(len(self.Fmask)):
    if self.Fmask[i]:
        # normalize and homogenize the image coordinates
        first_inliers.append(self.K_inv.dot([self.match_pts1[i][0],
                                             self.match_pts1[i][1], 1.0]))
        second_inliers.append(self.K_inv.dot([self.match_pts2[i][0],
                                              self.match_pts2[i][1], 1.0]))
Shouldn't it be one or the other? Or do I have some misunderstanding here?
Can someone please help me with that?
The first step, undistort, does a number of things to reverse the typical warping caused by small camera lenses. See the Wikipedia article on distortion (optics) for more background.
The last step, homogenizing the coordinates, is a completely different thing. The Wikipedia article on homogeneous coordinates explains it, but the basic idea is that you add an extra "fake" axis that lets you express all affine and projective transformations as chained simple matrix multiplications, and then just project back down at the end. Normalizing is just a step you do to make that math easier: basically, you want your extra coordinate to start off as 1.0 (multiply by the inverse of the projective norm).
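A toy illustration of that idea for a 2D image point (the numbers are made up): homogenise, apply a 3x3 projective transform as a single matrix multiplication, then divide by the last coordinate to project back:
import numpy as np

p = np.array([2.0, 3.0, 1.0])          # the point (2, 3) with the extra coordinate set to 1
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -1.0],
              [0.1, 0.0, 1.0]])        # an arbitrary projective transform
q = H.dot(p)                           # chained transforms are just chained matrix products
q_xy = q[:2] / q[2]                    # divide by the last coordinate to get back (x, y)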
The requirement for normalization is explained on page 107 of Multiple View Geometry in Computer Vision (Hartley and Zisserman). The normalization is required in addition to the undistortion.
If you are using raw pixel values in homogeneous coordinates, the Z coordinate, which is 1, will be small compared to the X and Y coordinates, e.g. (X=320, Y=220, Z=1).
But if the homogenized coordinates are the image pixel positions normalized to a standard range, i.e. -1.0 to 1.0, then we are talking about coordinate values that are all roughly in the same range, e.g. (0.75, -0.89, 1.0).
If the image coordinates have dramatically different ranges (as in the unnormalized case), then the DLT matrix produced will have a bad condition number, and consequently small variations in the input image pixel positions could produce wide variations in the result.
Please see page 107 for a very good explanation.
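For reference, a sketch of the isotropic normalisation Hartley and Zisserman describe (translate the points to their centroid, then scale so the mean distance from the origin is sqrt(2)); this is my own illustration, not the tutorial's code:
import numpy as np

def hartley_normalise(pts):
    # pts: N x 2 array of pixel coordinates
    pts = np.asarray(pts, dtype=float)
    centroid = pts.mean(axis=0)
    mean_dist = np.mean(np.linalg.norm(pts - centroid, axis=1))
    s = np.sqrt(2) / mean_dist
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])   # homogenise
    return (T @ pts_h.T).T, T
A fundamental matrix estimated from points normalised this way has to be de-normalised afterwards (F = T2.T @ F_norm @ T1) before it is used with raw pixel coordinates.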
For a class, I've written a Laplacian of Gaussian edge detector that works in the following way.
Make a Laplacian of Gaussian mask, given the variance of the Gaussian and the size of the mask
Convolve it with the image
Find the zero crossings (in a really shoddy manner); these are the edges of the image
If you so desire, the code for this program can be viewed here, but the most important part is where I create my Gaussian mask which depends on two functions that I've reproduced here for your convenience:
import math
import numpy as np

# Function for calculating the Laplacian of the Gaussian at a given point with a given variance
def l_o_g(x, y, sigma):
    # Formatted this way for readability
    nom = ((y ** 2) + (x ** 2) - 2 * (sigma ** 2))
    denom = (2 * math.pi * (sigma ** 6))
    expo = math.exp(-((x ** 2) + (y ** 2)) / (2 * (sigma ** 2)))
    return nom * expo / denom

# Create the Laplacian of the Gaussian, given a sigma
# Note the recommended size is 7 according to this website http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm
# Experimentally, I've found 6 to be much more reliable for images with clear edges
# and 4 to be better for images with a lot of little edges
def create_log(sigma, size=7):
    w = math.ceil(float(size) * float(sigma))
    # If the dimension is an even number, make it uneven
    if w % 2 == 0:
        print("even number detected, incrementing")
        w = w + 1
    # Now make the mask
    l_o_g_mask = []
    w_range = int(math.floor(w / 2))
    print("Going from " + str(-w_range) + " to " + str(w_range))
    for i in range(-w_range, w_range + 1):      # inclusive range (the original range_inc helper)
        for j in range(-w_range, w_range + 1):
            l_o_g_mask.append(l_o_g(i, j, sigma))
    l_o_g_mask = np.array(l_o_g_mask)
    l_o_g_mask = l_o_g_mask.reshape(w, w)
    return l_o_g_mask
All in all, it works relatively well, even if it is extremely slow because I don't know how to leverage NumPy. However, whenever I change the size of the Gaussian mask, the thickness of the edges I detect changes drastically.
Here is the image run with a size of mask equivalent to 4 times the given variance of the Gaussian:
Here is the same image run with a size of mask equivalent to 6 times the variance:
I'm kind of baffled, because the only thing the size parameter should change is the accuracy of the approximation of the Laplacian of Gaussian mask before I begin to convolve it with the image. So I ran a test where I wanted to visualize how my mask looked given different size parameters.
Here it is with a size of 4:
Here it is with a size of 6:
The shape of the function seems to be the same as far as I can tell from the zero crossings (they happen to be spaced around four pixels apart) and their peaks. Is there a better way to check?
Any suggestions as to why this issue might be occurring or how to investigate further are appreciated.
It turns out your assumption about the effect of increasing the mask size is wrong. Increasing the size doesn't actually improve the quality of the approximation or the resolution of the function. To explain, instead of using a complicated 2D function like the Laplacian of the Gaussian, let's take things back down to one dimension and pretend we are approximating the function f(x) = x^2.
Now, your code for calculating this function would look something like this:
def derp(sigma, size):
    w = math.ceil(float(size) * float(sigma))
    # If the dimension is an even number, make it uneven
    if w % 2 == 0:
        print("even number detected, incrementing")
        w = w + 1
    # Now make the "mask" of sampled x^2 values
    x_mask = []
    w_range = int(math.floor(w / 2))
    print("Going from " + str(-w_range) + " to " + str(w_range))
    for i in range(-w_range, w_range + 1):
        x_mask.append(i ** 2)
    return x_mask
If you were to increase the "size" of this function, you wouldn't be increasing the resolution; you'd actually be increasing the range of x values that you're sampling. For example, for a size of 3 you evaluate -1, 0, 1; for a size of 5 you evaluate -2, -1, 0, 1, 2. Notice this doesn't change the spacing between the samples. This is what you're actually seeing when you note that the zero crossings occur the same number of pixels apart.
Consequently, convolving with this really silly mask would give you really different results as the size changes. But what if we went back to the Laplacian of the Gaussian?
Well, the nice property the Laplacian of the Gaussian has is that the farther out you go, the closer its values get to zero. So unlike our silly x^2 function, you should start getting the same results once the mask is large enough.
Now, I think the reason you didn't see this in your test cases is that they were too limited in size: your program is too slow for you to really see the difference between, say, size=15 and size=20, but if you were to actually run those cases I think you would see that the image doesn't change that much.
This still doesn't answer what you should be doing; for that, we're going to have to look to the professionals, namely the implementation of gaussian_filter in SciPy (source here).
When you look at their source code, the first thing you'll notice is that, when creating the mask, they're basically doing the same thing as you: they always use an integer step size and they scale the size of the mask by its standard deviation.
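To make that concrete, here is a sketch (my own, not scipy's actual source) of a 1D Gaussian kernel whose radius is tied to sigma rather than to an independent size parameter, which is the behaviour described above:
import numpy as np

def gaussian_kernel_1d(sigma, truncate=4.0):
    # Kernel radius grows with sigma; truncate controls how many sigmas to keep
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    return kernel / kernel.sum()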
As to why they do it that way, I can't say, since I don't have that deep a knowledge of image processing or SciPy. However, this may make for a good new question to ask on SO.
So, I'm teaching myself Python with this tutorial and I'm stuck on exercise number 13, which says:
Write a function to uniformly shrink or enlarge an image. Your function should take an image along with a scaling factor. To shrink the image the scaling factor should be between 0 and 1; to enlarge the image the scaling factor should be greater than 1.
This is not meant as a question about PIL, but to ask which algorithm to use so I can code it myself.
I've found some similar questions like this one, but I don't know how to translate this into Python.
Any help would be appreciated.
I've come to this:
import image

win = image.ImageWin()
img = image.Image("cy.png")
factor = 2
W = img.getWidth()
H = img.getHeight()
newW = int(W * factor)
newH = int(H * factor)
newImage = image.EmptyImage(newW, newH)
for col in range(newW):
    for row in range(newH):
        p = img.getPixel(col, row)
        newImage.setPixel(col * factor, row * factor, p)
newImage.draw(win)
win.exitonclick()
I should do this in a function, but that doesn't matter right now. The arguments for the function would be (image, factor). You can try it in the tutorial's ActiveCode. It makes a stretched image with empty columns.
Your code as shown is simple and effective for what's known as a Nearest Neighbor resize, except for one little bug:
p = img.getPixel(col/factor,row/factor)
newImage.setPixel(col,row,p)
Edit: since you're sending a floating point coordinate into getPixel, you're not limited to nearest neighbor; you can implement any interpolation algorithm you want inside. The simplest thing to do is simply truncate the coordinates to int, which will cause pixels to be replicated when the factor is greater than 1, or skipped when the factor is less than 1.
Mark has the correct approach. To get a smoother result, you replace:
p = img.getPixel(col/factor,row/factor)
with a function that takes floating point coordinates and returns a pixel interpolated from several neighboring points in the source image. For linear interpolation it takes the four nearest neighbors; for higher-order interpolation it takes a larger number of surrounding pixels.
For example, if col/factor = 3.75 and row/factor = 1.9, a linear interpolation would take the source pixels at (3,1), (3,2), (4,1), and (4,2) and give a result between those 4 rgb values, weighted most heavily to the pixel at (4,2).
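A sketch of such a getPixel replacement, written against a plain NumPy array rather than the tutorial's image object (my adaptation, so adjust the indexing to your pixel type):
import numpy as np

def bilinear_sample(src, x, y):
    # src: H x W (or H x W x 3) array; (x, y) are floating point source coordinates
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, src.shape[1] - 1), min(y0 + 1, src.shape[0] - 1)
    fx, fy = x - x0, y - y0
    # Blend the two pixels in each row, then blend the two rows
    top = src[y0, x0] * (1 - fx) + src[y0, x1] * fx
    bottom = src[y1, x0] * (1 - fx) + src[y1, x1] * fx
    return top * (1 - fy) + bottom * fy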
You can do that using the Python Imaging Library.
Image.resize() should do what you want.
See http://effbot.org/imagingbook/image.htm
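For example (a minimal sketch, assuming Pillow is installed and the same factor-of-2 enlargement as above):
from PIL import Image

img = Image.open("cy.png")
factor = 2
w, h = img.size
resized = img.resize((int(w * factor), int(h * factor)))
resized.save("cy_resized.png")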
EDIT
Since you want to program this yourself without using a module, I have added an extra solution.
You will have to use the following algorithm.
load your image
extract its size
calculate the desired size (height * factor, width * factor)
create a new EmptyImage with the desired size
Use a nested loop over the pixels (row by column) of your image.
Then (for shrinking) you remove some pixels every once in a while, or (for enlarging) you duplicate some pixels in your image (see the sketch below this list).
If you want to get fancy, you could smooth the added or removed pixels by averaging their RGB values with their neighbours.
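Putting those steps together with the coordinate mapping from the accepted fix, a nearest-neighbour sketch using the tutorial's image module (untested here, since it relies on the ActiveCode environment):
import image

def resize(img, factor):
    new_w = int(img.getWidth() * factor)
    new_h = int(img.getHeight() * factor)
    new_img = image.EmptyImage(new_w, new_h)
    for col in range(new_w):
        for row in range(new_h):
            # Map each destination pixel back to its nearest source pixel
            p = img.getPixel(int(col / factor), int(row / factor))
            new_img.setPixel(col, row, p)
    return new_img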
Operators used to examine the spectrum, knowing the location and width of each peak, and judge which piece the spectrum belongs to. In the new setup, the image is captured by a camera onto a screen, and the width of each band must be computed programmatically.
Old system: spectroscope -> human eye
New system: spectroscope -> camera -> program
What is a good method to compute the width of each band, given their approximate X-axis positions, and given that this task used to be performed perfectly by eye and must now be performed by a program?
Sorry if I am short of details, but they are scarce.
Program listing that generated the previous graph; I hope it is relevant:
from PIL import Image
import numpy as np
from scipy.optimize import leastsq
# Load the picture with PIL, process if needed
pic = np.asarray(Image.open("spectrum.jpg"))
# Average over the colour channels, then project onto the horizontal axis
pic_avg = pic.mean(axis=2)
projection = pic_avg.sum(axis=0)
# Normalise and set the min value to zero for a nice fit
projection /= projection.mean()
projection -= projection.min()
#print(projection)
# Fit function, two gaussians, adjust as needed
def fitfunc(p, x):
    return p[0]*np.exp(-(x-p[1])**2/(2.0*p[2]**2)) + \
           p[3]*np.exp(-(x-p[4])**2/(2.0*p[5]**2))
errfunc = lambda p, x, y: fitfunc(p, x) - y
# Use scipy to fit, p0 is the initial guess
p0 = np.array([0, 20, 1, 0, 75, 10], dtype=float)
X = np.arange(len(projection))
p1, success = leastsq(errfunc, p0, args=(X, projection))
Y = fitfunc(p1, X)
# Output the result
print("Mean values at:", p1[1], p1[4])
# Plot the result
from pylab import *
#subplot(211)
#imshow(pic)
#subplot(223)
#plot(projection)
#subplot(224)
#plot(X, Y, 'r', lw=5)
#show()
subplot(311)
imshow(pic)
subplot(312)
plot(projection)
subplot(313)
plot(X, Y, 'r', lw=5)
show()
Given an approximate starting point, you could use a simple algorithm that finds the local maximum closest to this point. Your fitting code may be doing that already (I wasn't sure whether you were using it successfully or not).
Here's some code that demonstrates simple peak finding from a user-given starting point:
#!/usr/bin/env python
from __future__ import division
import numpy as np
from matplotlib import pyplot as plt
# Sample data with two peaks: small one at t=0.4, large one at t=0.8
ts = np.arange(0, 1, 0.01)
xs = np.exp(-((ts-0.4)/0.1)**2) + 2*np.exp(-((ts-0.8)/0.1)**2)
# Say we have an approximate starting point of 0.35
start_point = 0.35
# Nearest index in "ts" to this starting point is...
start_index = np.argmin(np.abs(ts - start_point))
# Find the local maxima in our data by looking for a sign change in
# the first difference
# From http://stackoverflow.com/a/9667121/188535
maxes = (np.diff(np.sign(np.diff(xs))) < 0).nonzero()[0] + 1
# Find which of these peaks is closest to our starting point
index_of_peak = maxes[np.argmin(np.abs(maxes - start_index))]
print("Peak centre at: %.3f" % ts[index_of_peak])
# Quick plot showing the results: blue line is data, green dot is
# starting point, red dot is peak location
plt.plot(ts, xs, '-b')
plt.plot(ts[start_index], xs[start_index], 'og')
plt.plot(ts[index_of_peak], xs[index_of_peak], 'or')
plt.show()
This method will only work if the ascent up the peak is perfectly smooth from your starting point. If this needs to be more resilient to noise, PyDSTool seems like it might help (I have not used it). This SciPy post details how to use it for detecting 1D peaks in a noisy data set.
So assume at this point you've found the centre of the peak. Now for the width: there are several methods you could use, but the easiest is probably the "full width at half maximum" (FWHM). Again, this is simple and therefore fragile. It will break for close double-peaks, or for noisy data.
The FWHM is exactly what its name suggests: you find the width of the peak where it's halfway to the maximum. Here's some code that does that (it just continues on from above):
# FWHM...
half_max = xs[index_of_peak]/2
# This finds where in the data we cross over the halfway point to our peak. Note
# that this is global, so we need an extra step to refine these results to find
# the closest crossovers to our peak.
# Same sign-change-in-first-diff technique as above
hm_left_indices = (np.diff(np.sign(np.diff(np.abs(xs[:index_of_peak] - half_max)))) > 0).nonzero()[0] + 1
# Add "index_of_peak" to result because we cut off the left side of the data!
hm_right_indices = (np.diff(np.sign(np.diff(np.abs(xs[index_of_peak:] - half_max)))) > 0).nonzero()[0] + 1 + index_of_peak
# Find closest half-max index to peak
hm_left_index = hm_left_indices[np.argmin(np.abs(hm_left_indices - index_of_peak))]
hm_right_index = hm_right_indices[np.argmin(np.abs(hm_right_indices - index_of_peak))]
# And the width is...
fwhm = ts[hm_right_index] - ts[hm_left_index]
print("Width: %.3f" % fwhm)
# Plot to illustrate FWHM: blue line is data, red circle is peak, red line
# shows FWHM
plt.plot(ts, xs, '-b')
plt.plot(ts[index_of_peak], xs[index_of_peak], 'or')
plt.plot(
[ts[hm_left_index], ts[hm_right_index]],
[xs[hm_left_index], xs[hm_right_index]], '-r')
plt.show()
It doesn't have to be the full width at half maximum — as one commenter points out, you can try to figure out where your operators' normal threshold for peak detection is, and turn that into an algorithm for this step of the process.
A more robust way might be to fit a Gaussian curve (or your own model) to a subset of the data centred around the peak, say from a local minimum on one side to a local minimum on the other, and use one of the parameters of that curve (e.g. sigma) to calculate the width.
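Continuing from the variables above, a sketch of that approach with scipy.optimize.curve_fit (the window half-width of 15 samples and the initial sigma are guesses on my part):
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, a, mu, sigma):
    return a * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))

# Fit a window around the peak found earlier
lo, hi = max(index_of_peak - 15, 0), min(index_of_peak + 15, len(ts))
p_opt, _ = curve_fit(gauss, ts[lo:hi], xs[lo:hi],
                     p0=[xs[index_of_peak], ts[index_of_peak], 0.05])
sigma = abs(p_opt[2])
# FWHM of a Gaussian is 2*sqrt(2*ln 2)*sigma
print("Gaussian sigma: %.3f, FWHM: %.3f" % (sigma, 2 * np.sqrt(2 * np.log(2)) * sigma))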
I realise this is a lot of code, but I've deliberately avoided factoring out the index-finding functions to "show my working" a bit more, and of course the plotting functions are there just to demonstrate.
Hopefully this gives you at least a good starting point to come up with something more suited to your particular data set.
Late to the party, but for anyone coming across this question in the future...
Eye movement data looks very similar to this; I'd base an approach on the one used by Nyström and Holmqvist (2010):
Smooth the data using a Savitzky-Golay filter (scipy.signal.savgol_filter in scipy v0.14+) to get rid of some of the low-level noise while keeping the large peaks intact; the authors recommend an order of 2 and a window size of about twice the width of the smallest peak you want to be able to detect.
Find where the bands are by arbitrarily removing all values above a certain y value (set them to numpy.nan), then take the (nan)mean and (nan)standard deviation of the remainder and remove all values greater than the mean + [parameter]*std (I think they use 6 in the paper). Iterate until you're not removing any data points, though depending on your data certain values of [parameter] may not stabilise.
Then use numpy.isnan() to find events vs non-events, and numpy.diff() to find the start and end of each event (values of -1 and 1 respectively). To get even more accurate start and end points, you can scan along the data backward from each start and forward from each end to find the nearest local minimum whose value is smaller than the mean + [another parameter]*std (I think they use 3 in the paper).
Then you just need to count the data points between each start and end.
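A rough sketch of that pipeline (my interpretation; the window length, the polynomial order of 2, and the k = 6 threshold are the parameters mentioned above, and all of them need tuning to your data):
import numpy as np
from scipy.signal import savgol_filter

def find_bands(projection, window=11, k=6.0):
    # Savitzky-Golay smoothing, order 2, window about twice the smallest band width
    smoothed = savgol_filter(projection, window_length=window, polyorder=2)
    work = smoothed.astype(float).copy()
    # Iteratively blank out values above mean + k*std until nothing more is removed
    while True:
        thresh = np.nanmean(work) + k * np.nanstd(work)
        above = work > thresh
        if not above.any():
            break
        work[above] = np.nan
    # NaN runs mark the bands; diff of the mask gives their starts and ends
    mask = np.isnan(work).astype(int)
    starts = np.nonzero(np.diff(mask) == 1)[0] + 1
    ends = np.nonzero(np.diff(mask) == -1)[0] + 1
    return list(zip(starts, ends))    # band widths are ends - starts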
This won't work for that double peak; you'd have to do some extrapolation for that.
The best method might be to statistically compare a bunch of methods with human results.
You would take a large variety of data and a large variety of measurement estimates (widths at various thresholds, areas above various thresholds, different threshold-selection methods, second moments, polynomial curve fits of various degrees, pattern matching, and so on) and compare these estimates to human measurements of the same data set. Pick the estimation method that correlates best with expert human results. Or maybe pick several methods: the best one for each of various heights, for various separations from other peaks, and so on.