Finding the center of mass in an image - python

Here is my image:
I want to find the center of mass in this image. I can find the approximate location of the center of mass by drawing two perpendicular lines as shown in this image:
I want to find it using an image processing tool in Python. I have a little experience with Python's image processing library (scikit-image), but I am not sure whether it can find the center of mass in my image.
I was wondering if anybody could help me do this. I would also be happy to use any other Python library that can find the center of mass in my image.
Thanks in advance for your help!

skimage.measure.regionprops will do what you want. Here's an example:
import imageio as iio
from skimage import filters
from skimage.color import rgb2gray # only needed for incorrectly saved images
from skimage.measure import regionprops
image = rgb2gray(iio.imread('eyeball.png'))
threshold_value = filters.threshold_otsu(image)
labeled_foreground = (image > threshold_value).astype(int)
properties = regionprops(labeled_foreground, image)
center_of_mass = properties[0].centroid
weighted_center_of_mass = properties[0].weighted_centroid
print(center_of_mass)
On my machine and with your example image, I get (228.48663375508113, 200.85290046969845).
We can make a pretty picture:
import matplotlib.pyplot as plt
from skimage.color import label2rgb
colorized = label2rgb(labeled_foreground, image, colors=['black', 'red'], alpha=0.1)
fig, ax = plt.subplots()
ax.imshow(colorized)
# Note the inverted coordinates because plt uses (x, y) while NumPy uses (row, column)
ax.scatter(center_of_mass[1], center_of_mass[0], s=160, c='C0', marker='+')
plt.show()
That gives me this output:
You'll note that there are some bits of foreground that you probably don't want in there, like at the bottom right of the picture. That's a whole other answer, but you can look at scipy.ndimage.label, skimage.morphology.remove_small_objects, and more generally at skimage.segmentation.
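For example, here's a minimal sketch of that cleanup (assuming the stray blobs are much smaller than the eyeball; min_size=64 is an arbitrary value you would tune to your image):
from skimage.morphology import remove_small_objects
# drop any connected foreground component smaller than min_size pixels
cleaned = remove_small_objects(labeled_foreground.astype(bool), min_size=64)
properties = regionprops(cleaned.astype(int), image)
center_of_mass = properties[0].centroid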

You can use the scipy.ndimage.center_of_mass function to find the center of mass of an object.
For example, using this question's image:
wget https://i.stack.imgur.com/ffDLD.jpg
import matplotlib.image as mpimg
import scipy.ndimage as ndi
img = mpimg.imread('ffDLD.jpg')
img = img.mean(axis=-1).astype('int')  # convert to grayscale by averaging the channels
cy, cx = ndi.center_of_mass(img)
print(cy, cx)
228.75223713169711 197.40991592129836
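If the image contains several objects, here is a sketch of how you might get one center per object (the threshold at the image mean is a placeholder you would tune):
import scipy.ndimage as ndi
mask = img > img.mean()  # crude foreground mask
labels, n = ndi.label(mask)  # label connected components
centers = ndi.center_of_mass(img, labels, range(1, n + 1))
print(centers)  # one (row, column) pair per labeled object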

You need to know about image moments.
Here is a tutorial on how to use them with OpenCV and Python.
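For reference, a minimal sketch of the moments approach with OpenCV (assuming a grayscale image with a bright object on a dark background; the centroid is (M10/M00, M01/M00)):
import cv2
img = cv2.imread('eyeball.png', 0)  # load as grayscale
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
M = cv2.moments(binary)  # spatial moments of the binary mask
cx = M['m10'] / M['m00']  # x coordinate of the centroid
cy = M['m01'] / M['m00']  # y coordinate of the centroid
print(cx, cy)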

Related

Py: OpenCV: How to detect if image matches another image / color in the left bottom corner?

I am trying to use OpenCV to detect whether the green-olive color/pixels of the #opentowork hashtag badge appear in the bottom-left corner of LinkedIn account thumbnails.
Example image:
Piece to find:
Example without #opentowork:
I have tried to match templates, but the result is really unreliable:
import cv2
import numpy as np
import matplotlib.pyplot as plt
import imutils
original = cv2.imread('orig.jpg', 0) # Piece to find
train_img = cv2.imread('1516535608688.jpg', 0) # Example image
print(cv2.matchTemplate(train_img, original, cv2.TM_CCOEFF_NORMED).max())
Then I googled and found out how to detect coordinates:
img1 = imutils.resize(train_img)
img2 = img1[197:373, 181:300]  # ROI of the image
indices = np.where(img2 != 0)  # row/column indices of non-black pixels
coordinates = zip(indices[0], indices[1])
This is my first (and, I expect, last) time using OpenCV, so I have no idea how to proceed.
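One possibly more reliable direction than template matching is to test the bottom-left corner for the badge color directly. This is only a sketch; the HSV bounds and thresholds below are guesses you would need to calibrate against real thumbnails:
import cv2
import numpy as np
img = cv2.imread('1516535608688.jpg')  # read in color this time
h, w = img.shape[:2]
corner = img[int(h * 0.7):, :int(w * 0.3)]  # bottom-left region
hsv = cv2.cvtColor(corner, cv2.COLOR_BGR2HSV)
# guessed green-olive range; calibrate on real #opentowork badges
mask = cv2.inRange(hsv, np.array([35, 60, 60]), np.array([85, 255, 255]))
print(mask.mean() > 10)  # True if enough corner pixels match; tune the threshold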

Python OpenCV - Canny borders detection

I'm trying to extract the borders of a sample (see figure below). The gradient between it and the air seems strong, so I tried to use OpenCV's Canny function, but the result is not satisfying (the second figure)... How could I improve the result?
You can find the picture here : https://filesender.renater.fr/?s=download&token=887799f6-f580-4579-8f75-148be4270cb0
import numpy as np
import cv2
from scipy import signal
median_optic_decentre = cv2.imread('median_plot.tiff',0)
edges = cv2.Canny(median_optic_decentre,10,60,apertureSize = 3)
Another method of obtaining edges is using the Laplacian operator (described in the OpenCV docs here). If you apply the Laplacian operator followed by some morphological operations, specifically morphological opening, the results look a bit better (if I'm understanding your question correctly):
import cv2
import matplotlib.pyplot as plt
img = cv2.imread('median_plot.tiff', 0)  # read as grayscale so the plots below display correctly
laplacian = cv2.Laplacian(img,cv2.CV_64F)
S = cv2.getStructuringElement(cv2.MORPH_CROSS,(3,3))
morph_opened_laplacian = cv2.dilate(cv2.erode(laplacian, S), S)
plt.subplot(1,3,1)
plt.gray()
plt.title("Original")
plt.imshow(img)
plt.subplot(1,3,2)
plt.title("Laplacian")
plt.imshow(laplacian)
plt.subplot(1,3,3)
plt.title("Opened Laplacian")
plt.imshow(morph_opened_laplacian)
plt.show()
Output:
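As a side note, the erode-then-dilate pair above is exactly what OpenCV calls a morphological opening, so the same result can be written more directly as:
morph_opened_laplacian = cv2.morphologyEx(laplacian, cv2.MORPH_OPEN, S)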

Image segmentation to find cells in biological images

I have a bunch of images of cells and I want to extract where the cells are. I'm currently using circular Hough transforms and it works all right, but it screws up regularly. Wondering if people have any pointers. Sorry this isn't a question specifically about software - it's about how to get better performance in this image segmentation problem.
I've tried other stuff in skimage with limited success, like the contour finding, edge detection and active contours. Nothing worked well out of the box, although it could just be that I didn't fiddle with the parameters correctly. I haven't done much image segmentation, and I don't really know how this stuff works or what the best ways are to jury-rig it.
Here is the code I am currently using; it takes a grayscale image as a NumPy array and looks for the cell as a circle:
import cv2
import numpy as np
smallest_dim = min(img.shape)
min_rad = int(img.shape[0]*0.05)
max_rad = int(img.shape[0]*0.5)
circles = cv2.HoughCircles((img*255).astype(np.uint8), cv2.HOUGH_GRADIENT, 1, 50,
                           param1=50, param2=30, minRadius=min_rad, maxRadius=max_rad)
circles = np.uint16(np.around(circles))
x, y, r = circles[0, 0]  # parameters of the first detected circle
Here is an example where the code found the wrong circle as the boundary of the cell. It seems like it got confused by the gunk that is surrounding the cell:
I think one issue may be the plotting of the circle (the coordinates may be wrong).
Also, as @Nicos mentioned, traditional image processing involves a lot of tweaking to make specific cases work (with more recent machine learning approaches, the tweaking is instead about keeping models from over-training). My attempt with skimage is shown below. The radius range, the number of circles, and the edge detection image all need to be tweaked, given the potential variation among and within images. Within this image there are, at least to me, three circles with varying gradient; from the Canny edge detection image you can see that we are getting more than three circles, and the illumination seems to vary at different locations (due to this being an SEM image?).
import matplotlib.pyplot as plt
import numpy as np
import imageio
from skimage import data, color
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle_perimeter
from skimage.util import img_as_ubyte
!wget https://i.stack.imgur.com/2tsWw.jpg
# rgb to gray https://stackoverflow.com/a/51571053/868736
im = imageio.imread('2tsWw.jpg')
gray = lambda rgb : np.dot(rgb[... , :3] , [0.299 , 0.587, 0.114])
gray = gray(im)
image = np.array(gray[60:220,210:450])
plt.imshow(image,cmap='gray')
edges = canny(image, sigma=3,)
plt.imshow(edges,cmap='gray')
overlayimage = np.copy(image)
# https://scikit-image.org/docs/dev/auto_examples/edges/plot_circular_elliptical_hough_transform.html
hough_radii = np.arange(30, 60, 2)
hough_res = hough_circle(edges, hough_radii)
# Select the most prominent X circles
x=1
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii,
                                           total_num_peaks=x)
# Draw them
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
#image = color.gray2rgb(image)
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius)
    overlayimage[circy, circx] = 255
print(radii)
ax.imshow(overlayimage,cmap='gray')
plt.show()

Thinning/Skeletonization is distorting my image

I am trying to thin this image but it keeps getting distorted.
This is my relevant code for applying the thinning. I have also tried the 'thin' function instead of 'skeletonize' but the results are similar.
import cv2
import numpy as np
from skimage.morphology import skeletonize, thin
new_im = cv2.imread(im_pth)
gray = cv2.cvtColor(new_im, cv2.COLOR_BGR2GRAY)
ske = (skeletonize(gray // 255) * 255).astype(np.uint8)
cv2.imshow("image", ske)  # display the skeletonized result
cv2.waitKey(0)
cv2.destroyAllWindows()
My goal is to get a shape similar to this after thinning:
What am I doing wrong? I have read online that jpg files sometimes cause issues, but I don't have the experience in this field to confirm that.
I'm not sure if your conversion from input image to binary is correct. Here's a version using scikit-image functions that seems to do what you want:
from skimage import img_as_float
from skimage import io, color, morphology
import matplotlib.pyplot as plt
image = img_as_float(color.rgb2gray(io.imread('char.png')))
image_binary = image < 0.5
out_skeletonize = morphology.skeletonize(image_binary)
out_thin = morphology.thin(image_binary)
f, (ax0, ax1, ax2) = plt.subplots(1, 3, figsize=(10, 3))
ax0.imshow(image, cmap='gray')
ax0.set_title('Input')
ax1.imshow(out_skeletonize, cmap='gray')
ax1.set_title('Skeletonize')
ax2.imshow(out_thin, cmap='gray')
ax2.set_title('Thin')
plt.savefig('/tmp/char_out.png')
plt.show()
From your example, and since your image is binary, I think that what you want is better achieved via (binary) erosion. Wikipedia explains the concept well. Intuitively (in case you don't have time to read the Wikipedia article), imagine you have a binary image A, like the one you have given, and let's call A_1 the set of pixels of A that have a value of 1. Then you define a "structuring element" K, which can for example be a square patch of size n*n. Then, in pseudocode:
for pixel in A_1:
    center K at pixel, and call this centered version K_pixel
    if K_pixel is contained in A_1:
        keep pixel
    else:
        discard pixel
So, this has the effect of thinning the connected component in your image.
This operation is standard and is implemented in OpenCV (cv2.erode); the OpenCV documentation covers it with Python examples as well as the C++ reference.
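A minimal sketch of erosion in OpenCV (assuming a binary image with a white character on a black background; the kernel size is something to tune):
import cv2
import numpy as np
binary = cv2.imread('char.png', 0)  # grayscale/binary input
kernel = np.ones((3, 3), np.uint8)  # 3x3 square structuring element K
eroded = cv2.erode(binary, kernel, iterations=1)  # peel one layer off the foreground
cv2.imwrite('char_eroded.png', eroded)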

Finding square centers from a picture

After an Image Processing, from fft's, filters, and thresholding, I obtained the following image:
So, I'm wondering how to extract those centers. Does OpenCV have a function for this (such as HoughCircles for detecting circles), or do I need to use clustering methods?
Maybe it is useful for you to know the code I used:
import cv2
import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage as ndimage
from scipy.ndimage import maximum_filter
img = cv2.imread("pic.tif",0)
s = np.fft.fftshift(np.fft.fft2(img))
intensity = 20 * np.log(np.abs(s))
maxs = maximum_filter(intensity, 125)
maxs[maxs < intensity] = intensity.max()
ret, thresh = cv2.threshold(maxs.astype('uint8'),0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
plt.imshow(thresh)
PS: I have another question that could be useful for some of you. The maximum_filter function gave me the "3 squares" (I then get a better visualization of them by thresholding), so is there a way to use maximum_filter to obtain "3 circles" instead? Then we could use HoughCircles to obtain the 3 circle centers.
You may need to use image moments.
As a pre-processing step, threshold the source image to create a mask of the squares, then pass it to findContours.
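A sketch of that pipeline, reusing the thresholded image from the question (note that cv2.findContours returns two values in OpenCV 4.x but three in 3.x):
import cv2
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    M = cv2.moments(c)  # contour moments
    if M['m00'] > 0:  # skip degenerate contours
        print(M['m10'] / M['m00'], M['m01'] / M['m00'])  # (x, y) center of each square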
