Modified active contour method in image processing - Python

I have images like this:
from skimage.segmentation import inverse_gaussian_gradient
from skimage.filters import meijering
from skimage import color
import numpy as np
import skimage.io
import cv2
# Load image, convert RGB to GRAY and then to float64
original = color.rgb2gray(skimage.io.imread("stack.jpg"))
image = skimage.img_as_float64(original)
And I want to get a result like this (each worm has one unique color and is correctly recognized at intersections, etc.):
I tried Gaussian gradient:
# Make Gaussian gradient of image and rescale it to uint8 [0, 255]
gradient = inverse_gaussian_gradient(image)
gradient = gradient - np.amin(gradient)
gradient = gradient / np.amax(gradient)
gradient = np.uint8(gradient * 255)
I also tried ridge detector on raw image and then also on gradient:
# Get ridges
ridge = meijering(gradient, sigmas=[1], mode='reflect', black_ridges=0)
ridge = np.uint8(ridge * 255)
And of course I tried Canny, too:
# Get edges
from skimage import feature  # import needed for feature.canny
edges = feature.canny(image, sigma=3)
edges = np.uint8(edges * 255)
I want to use some or all of these filters to find all the worms in the image (joining the separate curves from the filters, choosing the right curve path for each worm at intersections, etc.). I think the filters are doing a pretty good job. I want to program a modified active contour model that combines some features from them, but I do not know how to program it. The active contour model can be parametrized by:
Worm's length (a long white smooth curve in the ridge detector output)
Worm's smoothness - each curve can be approximated by a smooth curve (spline/Bezier/...)
Worm's symmetry - it always has two identical mirrored curves at its edges (somehow use the two long black bands along the long white curve in the gradient)
Worm's roughly constant width, except at its ends
Does anybody know how to program this in Python/C++?
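For reference, skimage's built-in active_contour implements the classic snake energy and could serve as a starting point: alpha penalizes stretching (related to the length term above) and beta penalizes bending (the smoothness term). The symmetry and constant-width terms would need a custom energy and your own optimization loop. A minimal sketch, assuming a hand-picked initial open curve along one worm (the endpoint coordinates are made up; image is the float image loaded above):
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour
# Hypothetical initialization: ~100 points roughly along one worm,
# e.g. picked by hand or sampled from the ridge image
r = np.linspace(100, 220, 100)  # assumed endpoint rows
c = np.linspace(80, 300, 100)   # assumed endpoint cols
init = np.stack([r, c], axis=1)
snake = active_contour(
    gaussian(image, sigma=3),    # smooth the image the snake sees
    init,
    alpha=0.015,                 # elasticity: discourages stretching
    beta=10,                     # rigidity: enforces smoothness
    w_line=0, w_edge=1,          # attract to edges rather than bright lines
    boundary_condition='fixed',  # open curve with pinned endpoints
)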

Related

How to remove white lines from an image in Python

I have an image of skin colour with a repetitive pattern (horizontal white lines) generated by a scanner that uses a line of sensors to capture the photo.
My question is how to denoise the image effectively using the FFT without affecting the image quality much. Somebody told me that I have to manually suppress the lines that appear in the magnitude spectrum, but I don't know how to do that. Can you please tell me how to do it?
My approach is to use the Fast Fourier Transform (FFT) to denoise the image channel by channel.
I have tried an HPF and an LPF in the Fourier domain, but the results were not good, as you can see:
My Code:
from skimage.io import imread, imsave
from matplotlib import pyplot as plt
import numpy as np
img = imread('skin.jpg')
# skimage.io.imread returns RGB order, so the red channel is index 0
R = img[...,0]
G = img[...,1]
B = img[...,2]
f1 = np.fft.fft2(R)
fshift1 = np.fft.fftshift(f1)
phase_spectrumR = np.angle(fshift1)
magnitude_spectrumR = 20*np.log(np.abs(fshift1))
f2 = np.fft.fft2(G)
fshift2 = np.fft.fftshift(f2)
phase_spectrumG = np.angle(fshift2)
magnitude_spectrumG = 20*np.log(np.abs(fshift2))
f3 = np.fft.fft2(B)
fshift3 = np.fft.fftshift(f3)
phase_spectrumB = np.angle(fshift3)
magnitude_spectrumB = 20*np.log(np.abs(fshift3))
#===============================
# LPF (for the HPF variant, start from fshift1 and zero out the centre block instead)
magR = np.zeros_like(R, dtype=np.float64)
rows, cols = magR.shape
magR[rows//4:3*rows//4, cols//4:3*cols//4] = \
    np.abs(fshift1[rows//4:3*rows//4, cols//4:3*cols//4])
resR = np.abs(np.fft.ifft2(np.fft.ifftshift(magR)))
resR = R - resR
#===============================
plt.subplot(221)
plt.imshow(R, cmap='gray')
plt.title('Original')
plt.subplot(222)
plt.imshow(magnitude_spectrumR, cmap='gray')
plt.title('Magnitude Spectrum')
plt.subplot(223)
plt.imshow(phase_spectrumR, cmap='gray')
plt.title('Phase Spectrum')
plt.subplot(224)
plt.imshow(resR, cmap='gray')
plt.title('Processed')
plt.show()
Here is a simple and effective linear filtering strategy to remove the horizontal line artifact:
Outline:
Estimate the frequency of the distortion by looking for a peak in the image's power spectrum in the vertical dimension. The function scipy.signal.welch is useful for this.
Design two filters: a highpass filter with cutoff just below the distortion frequency and a lowpass filter with cutoff near DC. We'll apply the highpass filter vertically and the lowpass filter horizontally to try to isolate the distortion. We'll use scipy.signal.firwin to design these filters, though there are many ways this could be done.
Compute the restored image as "image − (hpf ⊗ lpf) ∗ image".
Code:
# Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from scipy.ndimage import convolve1d
from scipy.signal import firwin, welch
def remove_lines(image, distortion_freq=None, num_taps=65, eps=0.025):
  """Removes horizontal line artifacts from scanned image.

  Args:
    image: 2D or 3D array.
    distortion_freq: Float, distortion frequency in cycles/pixel, or
      `None` to estimate from spectrum.
    num_taps: Integer, number of filter taps to use in each dimension.
    eps: Small positive param to adjust filters cutoffs (cycles/pixel).

  Returns:
    Denoised image.
  """
  image = np.asarray(image, float)
  if distortion_freq is None:
    distortion_freq = estimate_distortion_freq(image)

  hpf = firwin(num_taps, distortion_freq - eps, pass_zero='highpass', fs=1)
  lpf = firwin(num_taps, eps, pass_zero='lowpass', fs=1)
  return image - convolve1d(convolve1d(image, hpf, axis=0), lpf, axis=1)
def estimate_distortion_freq(image, min_frequency=1/25):
  """Estimates distortion frequency as spectral peak in vertical dim."""
  f, pxx = welch(np.reshape(image, (len(image), -1), 'C').sum(axis=1))
  pxx[f < min_frequency] = 0.0
  return f[pxx.argmax()]
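For reference, a minimal way to call it on the question's image (assuming the same skin.jpg file on disk) might be:
from skimage.io import imread, imsave
import numpy as np
image = imread('skin.jpg')
clean = remove_lines(image)
imsave('skin_clean.jpg', np.clip(clean, 0, 255).astype(np.uint8))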
Examples:
On the portrait image, estimate_distortion_freq estimates that the frequency of the distortion is 0.1094 cycles/pixel (period of 9.14 pixels). The transfer function of the filtering "image − (hpf ⊗ lpf) ∗ image" looks like this:
Here is the filtered output from remove_lines:
On the skin image, estimate_distortion_freq estimates that the frequency of the distortion is 0.08333 cycles/pixel (period of 12.0 pixels). Filtered output from remove_lines:
The distortion is mostly removed in both examples. It isn't perfect: on the portrait image, a couple of ripples are still visible near the top and bottom borders, a typical defect when using large filters or Fourier methods. Still, it's a good improvement over the original images.

Otsu Thresholding and Image gradient orientation

I want to apply Otsu thresholding to image gradients (to remove noise). After that, I want to compute the gradient orientation. Unfortunately, when I do so, I only get gradient orientations between 0 and 90 degrees. Without Otsu thresholding, the values are between 0 and 360.
Here is my code in Python:
import numpy as np
import cv2
img = cv2.imread('Ob.png',cv2.IMREAD_GRAYSCALE)
img = img.astype('float32')
dst1 = cv2.Sobel(img,cv2.CV_64F,1,0,ksize=5)
dst2 = cv2.Sobel(img,cv2.CV_64F,0,1,ksize=5)
ret1,th1 = cv2.threshold(dst1.astype(np.uint8),0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
ret2,th2 = cv2.threshold(dst2.astype(np.uint8),0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
mag, ang = cv2.cartToPolar(th1.astype(np.float32), th2.astype(np.float32))
ang_deg = np.rad2deg(ang)
What is happening in your code is quite simple to explain:
dst1 and dst2, the output of the two Sobel filters, are the x and y components of the gradient vector. For one given pixel, the gradient vector is given by (dst1[i,j], dst2[i,j]). This vector can have any values, for example (5.8,-2.1), leading to an angle of about 340 degrees.
Next, you threshold these two images. Otsu thresholding will find a value for which the image is nicely separated into pixels of low intensity and pixels of high intensity. These are assigned values of 0 and 255, respectively. But first, you convert the floating-point images to uint8, setting all negative values to 0. So, our vector (5.8,-2.1) is first converted to (5,0), and then thresholded, after which it becomes either (255,0) or (0,0) depending on what side of the threshold the 5 falls.
Thus, we have converted the vector with an angle of 340 degrees to one with an angle of 0 degrees or no computable angle (though atan2(0,0) typically yields 0 also).
In fact, all vectors have become either (0,0), (0,255), (255,0) or (255,255), meaning that you will only find angles of 0, 45 and 90 degrees.
What you should do instead is compute the magnitude, and threshold that (I don't know if Otsu is the ideal method for such an image). Next, use only the angle for those pixels where the magnitude is above the threshold.
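A minimal sketch of that approach (Otsu on a rescaled magnitude image, then keeping the orientation only where the mask is set; rescaling to 8 bits is my assumption, since OpenCV's Otsu needs a uint8 input):
import numpy as np
import cv2
img = cv2.imread('Ob.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
dst1 = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=5)
dst2 = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=5)
mag, ang = cv2.cartToPolar(dst1, dst2, angleInDegrees=True)
# Otsu needs an 8-bit image, so rescale the magnitude first
mag8 = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
ret, mask = cv2.threshold(mag8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Keep the angle only for strong-gradient pixels
ang_masked = np.where(mask > 0, ang, np.nan)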
Another common alternative is to use Gaussian gradients instead of Sobel. There, you can set a smoothing (regularization) parameter, which allows you to remove more or less noise. I often see this implemented as a Gaussian blur followed by the Sobel filters, though it makes more sense to me to directly use Gaussian derivative filters.
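For the Gaussian-derivative alternative, a short sketch with SciPy (sigma is the regularization parameter; larger values remove more noise):
import numpy as np
from scipy import ndimage
# img: 2D float grayscale image, as loaded above
gx = ndimage.gaussian_filter(img, sigma=2, order=(0, 1))  # d/dx
gy = ndimage.gaussian_filter(img, sigma=2, order=(1, 0))  # d/dy
mag = np.hypot(gx, gy)
ang = np.rad2deg(np.arctan2(gy, gx)) % 360  # orientation in [0, 360)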
If I may ask: why is the first thing you do converting the data to float32?
I think it would be more efficient to just let the conversion happen during the Sobel processing.
That is just my point of view.
The thing you call "noise" in the result of the gradient filter actually consists of non-maxima.
Algorithms such as Canny often threshold them after the Sobel filtering.
The inconvenience with this approach is finding the appropriate thresholds.
Personally, I use the non-maxima suppression of another algorithm.
Your code would become:
import numpy as np
import cv2
img = cv2.imread('Ob.png', cv2.IMREAD_GRAYSCALE)
dx, dy = cv2.spatialGradient(img, ksize=5)
mag = cv2.magnitude(dx.astype(np.float32), dy.astype(np.float32))
mag = mag / mag.max()  # edge map in [0, 1], as the detector expects
# Note: this needs opencv-contrib and a pretrained structured-forest model
# file ('model.yml' below is a placeholder path)
se = cv2.ximgproc.createStructuredEdgeDetection('model.yml')
ori = se.computeOrientation(mag)
edges_without_nms = se.edgesNms(mag, ori)
I hope it helps you.

Difficulty in detecting ellipses in an image

I am trying to detect ellipses in some images.
After some processing steps I got this edge map:
I tried using the Hough transform to detect ellipses, but this transform has very high complexity, so my computer didn't finish running the transform command even after 5 hours(!).
I also tried computing connected components and got this:
In the last case I also tried to continue and binarize the image.
In all cases I am stuck at these steps and have no idea how to continue from here.
My goal is to detect the tomatoes in the image. I am approaching this by trying to detect circles and ellipses and finding the radius (or the average radius in the ellipse case) of each one.
Edit:
I've added my code for the first method (the result is the edge map shown above):
img = cv2.imread(r'../images/assorted_tomatoes.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# lightreduce and gamma_correctiom are my own helper functions (not shown)
imgAfterLight = lightreduce(img)
imgAfterGamma = gamma_correctiom(imgAfterLight, 0.8)
th2 = 255 - cv2.adaptiveThreshold(imgAfterGamma, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY, 5, 3)
median2 = cv2.medianBlur(th2, 3)
where median2 is the edge map shown above,
and the code for connected components:
from scipy import ndimage
import matplotlib.pyplot as plt
import cv2
import numpy as np
fname = r'../images/assorted_tomatoes.jpg'
blur_radius = 1.0
img = cv2.imread(fname)  # scipy.misc.imread was removed in SciPy 1.2
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(img.shape)
# smooth the image (to remove small objects)
imgf = ndimage.gaussian_filter(gray_img, blur_radius)
threshold = 80
# find connected components
labeled, nr_objects = ndimage.label(imgf > threshold)
where labeled is the result above
Another edit:
This is the input image:
Input
The problem is that after edge detection there are a lot of unnecessary edges in subregions, which makes it hard to produce a smooth edge map.
To me this looks like a classic problem for the watershed algorithm. It is designed for segmenting out touching objects like these tomatoes. My example is in MATLAB (I'm on the wrong computer today), but it should translate to Python easily. First convert to greyscale as you do, and then invert the image:
I=rgb2gray(img)
I2=imcomplement(I)
The image as-is will over-segment, so we remove minima that are too shallow. This can be done with the h-minima transform:
I3=imhmin(I2,50);
You might need to play with the value 50, which is the height threshold for suppressing shallow minima. Now run the watershed algorithm and we get the following result:
L=watershed(I3);
The results are not perfect. It needs additional logic to remove some of the small regions, but it will give a reasonable estimate. In Python, watershed lives in skimage.segmentation and the h-minima transform in skimage.morphology; a rough translation is sketched below.
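A rough Python translation under the same assumptions (markers from skimage's h-minima transform instead of MATLAB's imhmin; the height value 50 still needs tuning):
import cv2
from scipy import ndimage
from skimage.morphology import h_minima
from skimage.segmentation import watershed
img = cv2.imread('../images/assorted_tomatoes.jpg')
I = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
I2 = 255 - I  # invert, like imcomplement
# Minima deeper than 50 become watershed markers; shallower ones are ignored
markers, _ = ndimage.label(h_minima(I2, 50))
L = watershed(I2, markers)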

Adaptive Canny Edge Detection Algorithm

I am trying to implement Canny Algorithm using python from scratch.
I am following these steps:
Bilateral Filtering the image
Gradient calculation using First Derivative of Gaussian oriented in 4 different directions
import math
import numpy as np

def deroGauss(w=5, s=1, angle=0):
    wlim = (w-1)/2
    y, x = np.meshgrid(np.arange(-wlim, wlim+1), np.arange(-wlim, wlim+1))
    G = np.exp(-(np.square(x) + np.square(y))/(2*np.float64(s)**2))
    G = G/np.sum(G)
    dGdx = -np.multiply(x, G)/np.float64(s)**2
    dGdy = -np.multiply(y, G)/np.float64(s)**2
    angle = angle*math.pi/180  # converting to radians
    dog = math.cos(angle)*dGdx + math.sin(angle)*dGdy
    return dog
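For example, the bank of 4 oriented kernels from step 2 could then be built and applied like this (the bilaterally filtered input is assumed to be a 2D float array named image):
from scipy import ndimage
kernels = [deroGauss(w=5, s=1, angle=a) for a in (0, 45, 90, 135)]
responses = [ndimage.convolve(image, k) for k in kernels]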
Non-max suppression on all 4 gradient images:
def nonmaxsup(I, gradang):
    dim = I.shape
    Inms = np.zeros(dim)
    xshift = int(np.round(math.cos(gradang*np.pi/180)))
    yshift = int(np.round(math.sin(gradang*np.pi/180)))
    Ipad = np.pad(I, (1,), 'constant', constant_values=(0, 0))
    # keep a pixel only if it is the maximum along the gradient direction
    for r in range(1, dim[0]+1):  # range, not Python 2's xrange
        for c in range(1, dim[1]+1):
            maggrad = [Ipad[r-xshift, c-yshift], Ipad[r, c], Ipad[r+xshift, c+yshift]]
            if Ipad[r, c] == np.max(maggrad):
                Inms[r-1, c-1] = Ipad[r, c]
    return Inms
Double thresholding and hysteresis: now here the real problem comes.
I am using Otsu's method to calculate the thresholds.
Should I use the grayscale image or the gradient images to calculate the threshold?
Because in the gradient image the pixel intensity values are reduced to very low values after bilateral filtering, and then reduced further after convolving with the derivative of Gaussian. For example: 28, 15.
The threshold calculated from the grayscale image is much higher than the threshold calculated from the gradient image.
Also, whether I use the grayscale or the gradient images to calculate the thresholds, the resulting image is not good enough and does not contain all the edges.
So practically, I have nothing left to apply hysteresis on.
I have tried
img_edge = img_edge*255/np.max(img_edge)
to scale up the values, but the result remains the same.
However, if I use the same thresholds with cv2.Canny, the result is very good.
What actually can be wrong?
Applying the Otsu threshold from the original image doesn't make sense; it is completely unrelated to the gradient intensities.
Otsu on the gradient intensities is not perfect either, because the statistical distributions of noise and edges are skewed and overlap a lot.
You can try some small multiple of the Otsu threshold, or some small multiple of the average. But in no case will you get perfect results with simple or hysteresis thresholding: edge detection is an ill-posed problem. A sketch of the multiple-of-Otsu idea follows.
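As a rough illustration of the "small multiple of Otsu" idea (the factors are just starting points to experiment with, not recommended values):
import numpy as np
import cv2
def hysteresis_thresholds(grad_mag, low_factor=0.5, high_factor=1.5):
    # Rescale the float gradient magnitude to uint8 so Otsu can run on it
    mag8 = cv2.normalize(grad_mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    otsu, _ = cv2.threshold(mag8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return low_factor * otsu, high_factor * otsu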

How to locate the center of a bright spot in an image?

Here is an example of the kinds of images I'll be dealing with:
(source: csverma at pages.cs.wisc.edu)
There is one bright spot on each ball. I want to locate the coordinates of the centre of the bright spot. How can I do it in Python or Matlab? The problem I'm having right now is that more than one point on the spot has the same (or roughly the same) white colour, but what I need is to find the centre of this 'cluster' of white points.
Also, for the leftmost and rightmost images, how can I find the centre of the whole circular object?
You can simply threshold the image and find the average coordinates of what is remaining. This handles the case when there are multiple values that have the same intensity. When you threshold the image, there will obviously be more than one bright white pixel, so if you want to bring it all together, find the centroid or the average coordinates to determine the centre of all of these white bright pixels. There isn't a need to filter in this particular case. Here's something to go with in MATLAB.
I've read in that image directly, converted to grayscale and cleared off the white border that surrounds each of the images. Next, I split up the image into 5 chunks, threshold the image, find the average coordinates that remain and place a dot on where each centre would be:
im = imread('http://pages.cs.wisc.edu/~csverma/CS766_09/Stereo/callight.jpg');
im = rgb2gray(im);
im = imclearborder(im);

%// Split up images and place into individual cells
split_point = floor(size(im,2) / 5);
images = mat2cell(im, size(im,1), split_point*ones(5,1));

%// Show image to place dots
imshow(im);
hold on;

%// For each image...
for idx = 1 : 5
    %// Get image
    img = images{idx};

    %// Threshold
    thresh = img > 200;

    %// Find coordinates of thresholded image
    [y,x] = find(thresh);

    %// Find average
    xmean = mean(x);
    ymean = mean(y);

    %// Place dot at centre
    %// Make sure you offset by the right number of columns
    plot(xmean + (idx-1)*split_point, ymean, 'r.', 'MarkerSize', 18);
end
I get this:
If you want a Python solution, I recommend using scikit-image combined with numpy and matplotlib for plotting. Here's the above code transcribed in Python. Note that I saved the image referenced by the link manually on disk and named it balls.jpg:
import skimage.io
import skimage.segmentation
import numpy as np
import matplotlib.pyplot as plt

# Read in the image as grayscale
# Note - intensities are floating point from [0,1]
im = skimage.io.imread('balls.jpg', as_gray=True)

# Threshold the image first, then clear the border
im_clear = skimage.segmentation.clear_border(im > (200.0/255.0))

# Determine where to split up the image
split_point = int(im.shape[1]/5)

# Show image in figure and hold to place dots in
plt.figure()
plt.imshow(np.dstack([im, im, im]))

# For each image...
for idx in range(5):
    # Extract sub image
    img = im_clear[:, idx*split_point:(idx+1)*split_point]

    # Find coordinates of thresholded image
    y, x = np.nonzero(img)

    # Find average
    xmean = x.mean()
    ymean = y.mean()

    # Plot on figure
    plt.plot(xmean + idx*split_point, ymean, 'r.', markersize=14)

# Show image and make sure axis is removed
plt.axis('off')
plt.show()
We get this figure:
Small sidenote
I could have totally skipped the above code and used regionprops (MATLAB link, scikit-image link). You could simply threshold the image, then apply regionprops to find the centroids of each cluster of white pixels, but I figured I'd show you a more manual way so you can appreciate the algorithm and understand it for yourself; a sketch of the regionprops route follows.
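For instance, a minimal sketch of that regionprops route in Python (same assumed balls.jpg file) would be:
import skimage.io
from skimage.measure import label, regionprops
im = skimage.io.imread('balls.jpg', as_gray=True)
labels = label(im > (200.0/255.0))  # label each bright blob
centroids = [r.centroid for r in regionprops(labels)]  # (row, col) per blob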
Hope this helps!
Use a 2D convolution and then find the point with the highest intensity. You can apply a convex non-linear function (such as exp) to the intensity values before applying the 2D convolution, to intensify the bright spots relative to the dimmer parts of the image. Something like conv2(exp(img), ker).
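A short sketch of this idea in Python (the kernel size is an arbitrary assumption):
import numpy as np
from scipy.ndimage import convolve
# img: 2D float grayscale image. exp() boosts bright pixels relative to
# dim ones; the averaging kernel smooths the response over the spot.
ker = np.ones((15, 15)) / 15**2
response = convolve(np.exp(img), ker)
y, x = np.unravel_index(response.argmax(), response.shape)  # spot centre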
