I'm trying to extract the borders of a sample (see the figure below). The gradient between the sample and the air seems significant, so I tried OpenCV's Canny function, but the result is not satisfying (second figure). How could I improve the result?
You can find the picture here: https://filesender.renater.fr/?s=download&token=887799f6-f580-4579-8f75-148be4270cb0
import cv2
# read the image as grayscale
median_optic_decentre = cv2.imread('median_plot.tiff', 0)
# Canny with low/high hysteresis thresholds of 10/60 and a 3x3 Sobel aperture
edges = cv2.Canny(median_optic_decentre, 10, 60, apertureSize=3)
Another method of obtaining edges is using the Laplacian operator (described in the OpenCV docs here). If you apply the Laplacian operator followed by some morphological operations, specifically morphological opening, the results look a bit better (if I'm understanding your question correctly):
import cv2
import matplotlib.pyplot as plt
img = cv2.imread('median_plot.tiff')
# Laplacian in 64-bit float to keep negative edge responses
laplacian = cv2.Laplacian(img, cv2.CV_64F)
# morphological opening = erosion followed by dilation with the same structuring element
S = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
morph_opened_laplacian = cv2.dilate(cv2.erode(laplacian, S), S)
plt.subplot(1,3,1)
plt.gray()
plt.title("Original")
plt.imshow(img)
plt.subplot(1,3,2)
plt.title("Laplacian")
plt.imshow(laplacian)
plt.subplot(1,3,3)
plt.title("Opened Laplacian")
plt.imshow(morph_opened_laplacian)
plt.show()
Output:
EDIT: In the initial question the color map was reversed, which caused some confusion; I changed the images and code to match the standard behavior.
Morphological opening and closing are idempotent operations (see Univ. Auckland):
Opening is an idempotent operation: once an image has been opened, subsequent openings with the same structuring element have no further effect on that image: (f ∘ s) ∘ s = f ∘ s.
I have the following image:
When trying to remove the skinny rectangles on the top right using OpenCV, I noticed that when I apply Opening iteratively with the same kernel, I get the desired result. To my understanding of how Opening/Closing works this should not be the case. Did I misunderstand anything or is there something wrong with the implementation in OpenCV?
Here is my example code and the results:
import cv2 as cv
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rc('image', cmap='gray')
# load image
img = cv.imread("image.jpg", 0)
# apply binary thresholding to remove JPEG compression artifacts
img = (img > 128).astype(np.uint8)
# adding a border around the image solves the issue
# img = cv.copyMakeBorder(img, 1, 1, 1, 1, cv.BORDER_CONSTANT)
kernel = cv.getStructuringElement(cv.MORPH_RECT,(20,5))
A = cv.morphologyEx(img, cv.MORPH_OPEN, kernel)
B = cv.morphologyEx(A, cv.MORPH_OPEN, kernel)
C = cv.morphologyEx(B, cv.MORPH_OPEN, kernel)
f, ax = plt.subplots(1,3)
ax[0].imshow(A)
ax[0].set_title("A")
ax[1].imshow(B)
ax[1].set_title("B")
ax[2].imshow(C)
ax[2].set_title("C")
plt.show()
Here is the result; notice the removed rectangles on the right border after each iteration:
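For what it's worth, here is a minimal sketch of the border hypothesis from the commented-out copyMakeBorder line above: if the non-idempotence comes from how OpenCV extrapolates pixels beyond the image edge during erosion and dilation, padding with a constant background border should restore (f ∘ s) ∘ s = f ∘ s. This assumes the same image.jpg and kernel as above:
import cv2 as cv
import numpy as np
img = (cv.imread("image.jpg", 0) > 128).astype(np.uint8)
kernel = cv.getStructuringElement(cv.MORPH_RECT, (20, 5))
# pad with one pixel of background, mirroring the commented-out line
# in the question, so the border no longer interacts with the kernel
padded = cv.copyMakeBorder(img, 1, 1, 1, 1, cv.BORDER_CONSTANT, value=0)
A = cv.morphologyEx(padded, cv.MORPH_OPEN, kernel)
B = cv.morphologyEx(A, cv.MORPH_OPEN, kernel)
# with the border in place, a second opening should change nothing
print("idempotent:", np.array_equal(A, B))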
I would like to ask one question: I want to implement code that cleans up a hand-drawn picture (drawn with a pen). Consider this image:
It is drawn with a blue pen and should be converted to a grayscale image using the following code:
import cv2
import matplotlib.pyplot as plt
from PIL import Image
user_test = filename
col = Image.open(user_test)
gray = col.convert('L')
bw = gray.point(lambda x: 0 if x<100 else 255, '1')
bw.save("bw_image.jpg")
bw
img_array = cv2.imread("bw_image.jpg", cv2.IMREAD_GRAYSCALE)
img_array = cv2.bitwise_not(img_array)
print(img_array.size)
plt.imshow(img_array, cmap = plt.cm.binary)
plt.show()
img_size = 28
new_array = cv2.resize(img_array, (img_size,img_size))
plt.imshow(new_array, cmap = plt.cm.binary)
plt.show()
The idea is that I am taking the image directly from a camera, but it loses the structure of the digit and comes out as an empty, black picture, like this:
Therefore the computer can't tell which digit it is, and the neural network fails to predict its label correctly. Could you please tell me which transformation I should apply in order to detect this image more precisely?
Edit: I have applied the following code
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
user_test = filename
col = Image.open(user_test)
gray = col.convert('L')
# the histogram needs a flat array of pixel values
img_array = np.asarray(gray)
plt.hist(img_array.flatten())
plt.show()
and got
You have several issues here, and you can address them methodically.
First of all, you have a thresholding problem.
As I suggested in earlier comments, you can easily see why your original threshold was unsuccessful by plotting the histogram:
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from matplotlib import cm
im = Image.open('whatever_path_you_choose.jpg').convert("L")
im = np.asarray(im)
plt.hist(im.flatten(), bins=np.arange(255));
Looking at the image you gave:
Clearly the threshold should be somewhere between 100 and 200, not 100 as in your original code. Also note that this distribution isn't very bimodal, so I'm not sure Otsu's method would work well here.
If we eyeball it (this can be tuned), we can see that thresholding at 145-ish gives decent results in terms of segmentation.
im_thresh = (im >= 145)
plt.imshow(im_thresh, cmap=cm.gray)
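If you'd rather check the Otsu hunch than eyeball it, a one-line sketch using the same im array as above:
from skimage.filters import threshold_otsu
# compare Otsu's automatic threshold to the hand-picked 145
print("otsu threshold:", threshold_otsu(im))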
Now you have an additional issue: the horizontal ruled lines. You can address this by writing on blank paper, as suggested. That wasn't exactly your question, but I'll try to address it anyway, in a naive fashion. You can try a Sobel filter (think of it as the derivative of the image, which picks out the lines), followed by a median filter to keep roughly the most common local pixel intensity; the size of the filter might have to vary for different digits, though. This should clear up some of the lines. For a more rigorous approach, read up on the Hough line transform for detecting horizontal lines and try to whiten them out (see the sketch at the end of this answer).
This is my very naive approach:
from skimage.filters import sobel
from scipy.ndimage import median_filter
#Sobel filter reverses intensities so subtracting the result from 1.0 turns it back to the original
plt.imshow(1.0 - median_filter(sobel(im_thresh), [10, 3]), cmap=cm.gray)
You can try cropping automatically afterwards. Honestly I think most neural networks that could recognize MNIST-like digits could recognize the result I posted at the end as well.
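And here is a rough sketch of the Hough-line idea mentioned above, using OpenCV's probabilistic Hough transform; the Canny thresholds and line-length parameters are guesses that would need tuning for your images:
import cv2
import numpy as np
# work on an 8-bit copy of the thresholded image
img8 = (im_thresh * 255).astype(np.uint8)
edges = cv2.Canny(img8, 50, 150)
# look for long, nearly horizontal segments
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        if abs(y2 - y1) < 5:  # roughly horizontal only
            # whiten the line out (paper is white here)
            cv2.line(img8, (x1, y1), (x2, y2), 255, thickness=3)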
Try using the skimage package like this. It has built-in functions for image processing:
from skimage import io
from skimage.restoration import denoise_tv_chambolle
from skimage.filters import threshold_otsu
image = io.imread('path/to/your/image', as_gray=True)
# Denoising (total variation); the image is grayscale, so no channel axis
denoised_image = denoise_tv_chambolle(image, weight=0.1)
# Thresholding with Otsu's method
threshold = threshold_otsu(denoised_image)
thresholded_image = denoised_image > threshold
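To inspect the result, a quick matplotlib sketch (my addition; the original answer stops at the thresholding step):
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 2)
axes[0].imshow(denoised_image, cmap='gray')
axes[0].set_title('Denoised')
axes[1].imshow(thresholded_image, cmap='gray')
axes[1].set_title('Thresholded')
plt.show()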
I have a bunch of images of cells, and I want to extract where the cells are. I'm currently using circular Hough transforms, and it works all right but screws up regularly. I'm wondering if people have any pointers. Sorry this isn't a question specifically about software; it's about how to get better performance on this image segmentation problem.
I've tried other stuff in skimage with limited success, like the contour finding, edge detection and active contours. Nothing worked well out of the box, although it could just be that I didn't fiddle with the parameters correctly. I haven't done much image segmentation, and I don't really know how this stuff works or what the best ways are to jury-rig it.
Here is the code I am currently using. It takes a grayscale image as a numpy array and looks for the cell as a circle:
import cv2
import numpy as np
# use the smallest image dimension to bound the radius search
smallest_dim = min(img.shape)
min_rad = int(smallest_dim * 0.05)
max_rad = int(smallest_dim * 0.5)
circles = cv2.HoughCircles((img * 255).astype(np.uint8), cv2.HOUGH_GRADIENT, 1, 50,
                           param1=50, param2=30, minRadius=min_rad, maxRadius=max_rad)
circles = np.uint16(np.around(circles))
# keep the first (most confident) detection
x, y, r = circles[0, :][:1][0]
Here is an example where the code found the wrong circle as the boundary of the cell. It seems like it got confused by the gunk that is surrounding the cell:
I think one issue may be the plotting of the circle (the coordinates may be wrong).
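If the plotting is the suspect, one way to rule it out is to draw the detected circle directly on the image with OpenCV instead of matplotlib; HoughCircles returns centers as (x, y), i.e. (column, row), which is easy to swap by accident. A sketch, assuming img, x, y, and r from the question's code:
import cv2
import numpy as np
# draw the detected circle and its center on a color copy of the image;
# cv2.circle expects the center as (x, y), i.e. (column, row)
vis = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_GRAY2BGR)
cv2.circle(vis, (int(x), int(y)), int(r), (0, 255, 0), 2)
cv2.circle(vis, (int(x), int(y)), 2, (0, 0, 255), 3)
cv2.imwrite('detected_circle.png', vis)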
Also, as @Nicos mentioned, there is a lot of tweaking involved with traditional image processing to make specific cases work (with more recent machine learning approaches, the tweaking is instead about keeping models from over-training). My attempt with skimage is displayed below. The radius range, the number of circles, and the edge detection image all need to be tweaked, given the potential variation among and within images. Within this image there are, at least to me, 3 circles with varying gradient. From the Canny edge detection image you can sort of see that we are getting more than 3 circles; further, the "illumination" seems to vary at different locations (due to this being an SEM image?).
import matplotlib.pyplot as plt
import numpy as np
import imageio
from skimage import data, color
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle_perimeter
from skimage.util import img_as_ubyte
!wget https://i.stack.imgur.com/2tsWw.jpg
# rgb to gray https://stackoverflow.com/a/51571053/868736
im = imageio.imread('2tsWw.jpg')
gray = lambda rgb : np.dot(rgb[... , :3] , [0.299 , 0.587, 0.114])
gray = gray(im)
image = np.array(gray[60:220,210:450])
plt.imshow(image,cmap='gray')
edges = canny(image, sigma=3,)
plt.imshow(edges,cmap='gray')
overlayimage = np.copy(image)
# https://scikit-image.org/docs/dev/auto_examples/edges/plot_circular_elliptical_hough_transform.html
hough_radii = np.arange(30, 60, 2)
hough_res = hough_circle(edges, hough_radii)
# Select the most prominent X circles
x=1
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii,
                                           total_num_peaks=x)
# Draw them
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
#image = color.gray2rgb(image)
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius)
    overlayimage[circy, circx] = 255
print(radii)
ax.imshow(overlayimage,cmap='gray')
plt.show()
Here is my image:
I want to find the center of mass in this image. I can find the approximate location of the center of mass by drawing two perpendicular lines as shown in this image:
I want to find it using an image processing tool in Python. I have a little experience with Python's image processing library (scikit-image), but I am not sure whether it can help find the center of mass in my image.
I was wondering if anybody could help me to do it. I will be happy if it is possible to find the center of mass in my image using any other library in python.
Thanks in advance for your help!
skimage.measure.regionprops will do what you want. Here's an example:
import imageio as iio
from skimage import filters
from skimage.color import rgb2gray # only needed for incorrectly saved images
from skimage.measure import regionprops
image = rgb2gray(iio.imread('eyeball.png'))
threshold_value = filters.threshold_otsu(image)
labeled_foreground = (image > threshold_value).astype(int)
properties = regionprops(labeled_foreground, image)
center_of_mass = properties[0].centroid
weighted_center_of_mass = properties[0].weighted_centroid
print(center_of_mass)
On my machine and with your example image, I get (228.48663375508113, 200.85290046969845).
We can make a pretty picture:
import matplotlib.pyplot as plt
from skimage.color import label2rgb
colorized = label2rgb(labeled_foreground, image, colors=['black', 'red'], alpha=0.1)
fig, ax = plt.subplots()
ax.imshow(colorized)
# Note the inverted coordinates because plt uses (x, y) while NumPy uses (row, column)
ax.scatter(center_of_mass[1], center_of_mass[0], s=160, c='C0', marker='+')
plt.show()
That gives me this output:
You'll note that there are some bits of foreground that you probably don't want in there, like at the bottom right of the picture. That's a whole other answer, but you can look at scipy.ndimage.label, skimage.morphology.remove_small_objects, and more generally at skimage.segmentation.
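As a hedged starting point for that cleanup (the min_size of 500 pixels is my guess and would need tuning):
from skimage.morphology import remove_small_objects
from skimage.measure import regionprops
# drop connected foreground components smaller than ~500 pixels,
# then recompute the centroid on the cleaned mask
cleaned = remove_small_objects(labeled_foreground.astype(bool), min_size=500)
properties = regionprops(cleaned.astype(int), image)
print(properties[0].centroid)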
You can use the scipy.ndimage.center_of_mass function to find the center of mass of an object.
For example, using this question's image:
wget https://i.stack.imgur.com/ffDLD.jpg
import matplotlib.image as mpimg
import scipy.ndimage as ndi
img = mpimg.imread('ffDLD.jpg')
img = img.mean(axis=-1).astype('int') # in grayscale
cy, cx = ndi.center_of_mass(img)
print(cy, cx)
228.75223713169711 197.40991592129836
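One caveat: computed on the raw grayscale values, this is the intensity-weighted center, so the background contributes too. If you want the centroid of the object alone, threshold first and take the center of mass of the mask. A rough sketch, reusing img from above (the threshold choice is mine):
import scipy.ndimage as ndi
# mask out the (dark) background with a crude global threshold; tune as needed
mask = img > img.mean()
cy, cx = ndi.center_of_mass(mask)
print(cy, cx)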
You need to know about Image Moments.
Here is a tutorial on how to use them with OpenCV and Python.
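For completeness, a minimal sketch of the idea (the mask.png filename is hypothetical; any binary image such as the thresholded ones above would do):
import cv2
# hypothetical binary mask image; nonzero pixels are treated as 1
mask = cv2.imread('mask.png', 0)
M = cv2.moments(mask, binaryImage=True)
# centroid from the raw spatial moments: (m10/m00, m01/m00)
cx = M['m10'] / M['m00']
cy = M['m01'] / M['m00']
print(cx, cy)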
After some image processing (FFTs, filters, and thresholding), I obtained the following image:
So I'm wondering how to extract those centers. Does any function in OpenCV do this (the way HoughCircles detects circles)? Or do I need to use clustering methods?
Maybe it is useful for you to see the code I used:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import maximum_filter
img = cv2.imread("pic.tif", 0)
# log-magnitude spectrum of the centered 2D FFT
s = np.fft.fftshift(np.fft.fft2(img))
intensity = 20 * np.log(np.abs(s))
maxs = maximum_filter(intensity, 125)
maxs[maxs < intensity] = intensity.max()
# Otsu threshold of the (inverted) filtered spectrum
ret, thresh = cv2.threshold(maxs.astype('uint8'), 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
plt.imshow(thresh)
PS: I have another question that could be useful for some of you. The maximum_filter function gave me the "3 squares" (and thresholding then gives a better visualization of them). Is there a way to use maximum_filter to obtain "3 circles" instead? Then we could use HoughCircles to obtain the 3 circle centers.
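Regarding the PS: scipy's maximum_filter also accepts a footprint argument, so a circular neighborhood can be passed instead of the default square window. A sketch using skimage's disk, where the radius of 62 is my rough match to the 125-pixel window above:
from scipy.ndimage import maximum_filter
from skimage.morphology import disk
# circular footprint of radius 62 roughly matches the 125x125 square window
maxs_round = maximum_filter(intensity, footprint=disk(62))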
You may need to use Image Moments.
As a pre-processing step, threshold the source to create a mask of the squares, and then pass it to findContours.
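A minimal sketch of that pipeline, assuming the thresh image from the question and OpenCV 4's findContours return signature (the details are my reading of this answer, not the answerer's code):
import cv2
# find the outer contours of the thresholded blobs
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
centers = []
for c in contours:
    M = cv2.moments(c)
    if M['m00'] > 0:
        # centroid of each blob from its spatial moments
        centers.append((M['m10'] / M['m00'], M['m01'] / M['m00']))
print(centers)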