Detecting blobs with scikit-image - python

I am trying to detect the large yellow and red circles (not the small ones at the bottom) using scikit-image. It seems to be very accurate for the red circles (after filtering for larger radii), but it can't seem to detect the yellow circles.
What I've tried, based on this, is below. I am only interested in the array of x, y, and radius values, so the code doesn't need to overlay the circles on the image.
I've tried the three methods in the scikit-image example, but found that blob_doh (Determinant of Hessian) worked best for at least identifying the red circles, as the other two methods didn't detect them at all.
I've also tried using scikit-image's Hough circles from here, but the same problem exists: it doesn't detect the yellow circles.
from skimage import data, io
from skimage.feature import blob_doh
from skimage.color import rgb2gray
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
image = io.imread("2-9.jpg")
image_gray = rgb2gray(image)
blobs_doh = blob_doh(image_gray, max_sigma=30, threshold=.01)
df_doh = pd.DataFrame(blobs_doh, columns=["y", "x", "radius"])
df_doh.to_csv('doh.csv')
I then imported the data as a CSV and plotted it using R (to show accuracy):
library(data.table)  # for fread
library(dplyr)       # for filter
library(imager)
df <- fread('doh.csv')
im <- load.image("2-9.jpg")
plot(im)
points(df$x, df$y)
df_filtered <- filter(df, radius >= 4.22)  # any smaller radius gives too many points
plot(im)
points(df_filtered$x, df_filtered$y)

The yellow blobs are probably too lightly colored to be picked up by blob_doh. Since you appear to have strong prior knowledge about these images (exact yellow and exact red, based on my color picker), you can make an image with just the target points:
from skimage import io, util
image = util.img_as_float(io.imread("2-9.jpg"))
t = 0.001  # tolerance of squared deviation from the exact color
dist_red = ((image - [1, 0, 0]) ** 2).sum(axis=-1)     # squared distance from pure red
dist_yellow = ((image - [1, 1, 0]) ** 2).sum(axis=-1)  # squared distance from pure yellow
blobs = (dist_red < t) | (dist_yellow < t)
blobs_float = blobs.astype(float)  # convert from boolean to 1.0/0.0
Then, use blob_doh on the blobs_float image.
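A minimal sketch of that last step, reusing blobs_float from above (the parameters are just the ones from the question's call):
from skimage.feature import blob_doh
import pandas as pd
blobs_doh = blob_doh(blobs_float, max_sigma=30, threshold=.01)
df_doh = pd.DataFrame(blobs_doh, columns=["y", "x", "radius"])
df_doh.to_csv('doh_color_filtered.csv')  # illustrative output file name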
Hope this helps!

Related

How to delete entries below threshold in class 'imantics.annotation.Polygons'?

I have greyscale images with features of interest displayed as grey and white, and the background as black.
I am trying to draw polygons around the features of interest.
My problem is that polygons are also drawn around unwanted areas, e.g. the edges of the images (input image). In the code below I have tried to filter out these "false positive" features of interest using a Gaussian blur and morphological operations (see code below):
import cv2
import skimage.morphology
from imantics import Polygons, Mask
# Read the mask as a reduced-size grayscale image (64 == cv2.IMREAD_REDUCED_GRAYSCALE_8)
mask = cv2.imread('mask.jpg', 64)
print(mask.max())
print(mask.min())
# Apply a Gaussian blur filter five times
for _ in range(5):
    mask = cv2.GaussianBlur(mask, (9, 9), 0)
ellipseFootprint = skimage.morphology.footprints.ellipse(1, 1)
squareFootprint = skimage.morphology.footprints.square(8)
# Erode repeatedly to remove small features, then dilate to restore what remains
maskMorph = mask
for i in range(10):
    maskMorph = skimage.morphology.erosion(maskMorph, footprint=ellipseFootprint, out=None)
    print(i)
for k in range(2):
    maskMorph = skimage.morphology.dilation(maskMorph, footprint=None, out=None)
    print(k)
polygons = Mask(maskMorph).polygons()
print(len(polygons.segmentation))
print(type(polygons))
print(polygons.segmentation)
newPoly = polygons.draw(mask, color=[255, 255, 0], thickness=3)
cv2.imshow("title", newPoly)
cv2.waitKey()
Indeed, I have tried to filter out smaller features/polygons and "false positive" features of interest using a Gaussian blur and morphological operations, but I am struggling to get rid of all of them (see output image).
My thinking is therefore to add a minimum size threshold for the features/polygons in the image to be kept.
I have started on the following, but am not sure how to progress.
lengthPolySeg = len(polygons.segmentation)
for l in range(lengthPolySeg - 1):
    if len(polygons.segmentation[l]) < 50:
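One way this might be completed (I am not sure it is right) is to build a new list keeping only the segmentations above the threshold:
# keep only segmentations with at least 50 values (the threshold from above)
filteredSeg = [seg for seg in polygons.segmentation if len(seg) >= 50]
print(len(filteredSeg))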
Any advice would be most appreciated.

Image segmentation to find cells in biological images

I have a bunch of images of cells and I want to extract where the cells are. I'm currently using circular Hough transforms and it works all right, but it regularly screws up. Wondering if people have any pointers. Sorry this isn't a question specifically about software - it's about how to get better performance in this image segmentation problem.
I've tried other stuff in skimage with limited success, like contour finding, edge detection and active contours. Nothing worked well out of the box, although it could just be that I didn't fiddle with the parameters correctly. I haven't done much image segmentation, and I don't really know how this stuff works or what the best ways are to jury-rig it.
Here is the code I am currently using, which takes a grayscale image as a numpy array and looks for the cell as a circle:
import cv2
import numpy as np
# img is assumed to be a grayscale image as a float numpy array in [0, 1]
smallest_dim = min(img.shape)
min_rad = int(img.shape[0] * 0.05)
max_rad = int(img.shape[0] * 0.5)  # 0.5
circles = cv2.HoughCircles((img * 255).astype(np.uint8), cv2.HOUGH_GRADIENT, 1, 50,
                           param1=50, param2=30, minRadius=min_rad, maxRadius=max_rad)
circles = np.uint16(np.around(circles))
x, y, r = circles[0, :][:1][0]  # first (strongest) detected circle
Here is an example where the code found the wrong circle as the boundary of the cell. It seems like it got confused by the gunk surrounding the cell:
I think one issue may be the plotting of the circle (the coordinates may be wrong).
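As a quick check, here is a minimal sketch that draws what was actually detected, reusing img and circles from the question. Note that cv2.HoughCircles returns rows of (x, y, r), where x is the column coordinate, which is easy to swap when plotting:
import cv2
import numpy as np
# circles has shape (1, N, 3); each row is (x, y, r) with x = column, y = row
vis = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_GRAY2BGR)
for x, y, r in np.around(circles[0]).astype(int):
    cv2.circle(vis, (x, y), r, (0, 255, 0), 2)  # detected boundary
    cv2.circle(vis, (x, y), 2, (0, 0, 255), 3)  # detected center
cv2.imwrite('circles_check.png', vis)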
Also, as @Nicos mentioned, traditional image processing involves a lot of tweaking to make specific cases work (with more recent machine learning approaches, the tweaking instead goes into keeping models from over-training). My attempt with skimage is displayed below. The radius range, the number of circles, and the edge detection image all need to be tweaked, given the potential variation among and within images. Within this image there are, at least to me, 3 circles with varying gradient; from the Canny edge detection image you can see we are getting more than 3 circles, and the illumination seems to vary at different locations (perhaps because this is an SEM image).
import matplotlib.pyplot as plt
import numpy as np
import imageio
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle_perimeter
!wget https://i.stack.imgur.com/2tsWw.jpg
# rgb to gray https://stackoverflow.com/a/51571053/868736
im = imageio.imread('2tsWw.jpg')
gray = lambda rgb: np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
gray = gray(im)
image = np.array(gray[60:220, 210:450])
plt.imshow(image, cmap='gray')
edges = canny(image, sigma=3)
plt.imshow(edges, cmap='gray')
overlayimage = np.copy(image)
# https://scikit-image.org/docs/dev/auto_examples/edges/plot_circular_elliptical_hough_transform.html
hough_radii = np.arange(30, 60, 2)
hough_res = hough_circle(edges, hough_radii)
# Select the most prominent X circles
x = 1
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii,
                                           total_num_peaks=x)
# Draw them
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius)
    overlayimage[circy, circx] = 255
print(radii)
ax.imshow(overlayimage, cmap='gray')
plt.show()

Feature extraction and take color histogram

I am working on feature extraction for image processing. I have a photo of a bird from which I have to extract the bird area and tell what color the bird has. I used the Canny feature extraction method to get the edges of the bird.
How can I extract only the bird area and make the background blue?
An OpenCV solution would also be fine.
import os
%matplotlib inline
import matplotlib.pyplot as plt
from skimage import io, feature
filename = os.path.join(os.getcwd(), 'image', 'image_bird.jpeg')
bird = io.imread(filename, as_gray=True)  # as_grey was renamed to as_gray in newer scikit-image
plt.imshow(bird)
edges = feature.canny(bird, sigma=1)
plt.imshow(edges)
The actual bird image can be taken from the bird link.
1. Identify the edges of your image.
2. Binarize the image via automatic thresholding.
3. Use contour detection to identify the black regions which are inside a white region and merge them with the white region. (Mockup, image may slightly vary.)
4. Use the created image as a mask to color the background: simply set each background (black) pixel to its respective color.
As you can see, the approach is far from perfect, but it should give you a general idea of how to accomplish your task. The final image quality might be improved by slightly eroding the mask to tighten it to the contours of the bird. You can then also use the mask to calculate your color histogram by taking only foreground pixels into account.
Edit: Look here:
Eroded mask
Final image
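For concreteness, a compressed sketch of how steps 2-4 might look in OpenCV (my own illustration, not the original answer's exact method; it assumes OpenCV 4's findContours signature and the file name bird.jpg):
import cv2
import numpy as np
gray = cv2.imread('bird.jpg', 0)
# Step 2: automatic (Otsu) thresholding
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Step 3: merge black regions enclosed by white ones by filling the external contours
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
mask = np.zeros_like(gray)
cv2.drawContours(mask, contours, -1, 255, cv2.FILLED)
# Step 4: use the mask to color each background (black) pixel
img = cv2.imread('bird.jpg')
img[mask == 0] = (255, 0, 0)  # blue in BGR
cv2.imwrite('bird_blue_bg.png', img)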
According to this article https://www.pyimagesearch.com/2016/04/11/finding-extreme-points-in-contours-with-opencv/
and this question CV - Extract differences between two images,
I wrote some Python code, below. As the previous answer said, it is also far from perfect. The main disadvantage of this code is the set of constants that must be tuned manually: minThres (50), maxThres (100), and the dilate and erode iteration counts.
import cv2
import numpy as np
windowName = "Edges"
pictureRaw = cv2.imread("bird.jpg")
## set to gray
pictureGray = cv2.cvtColor(pictureRaw, cv2.COLOR_BGR2GRAY)
## blur
pictureGaussian = cv2.GaussianBlur(pictureGray, (7,7), 0)
## canny edge detector - you must specify threshold values
pictureCanny = cv2.Canny(pictureGaussian, 50, 100)
## perform a series of erosions + dilations to remove any small regions of noise
pictureDilate = cv2.dilate(pictureCanny, None, iterations=20)
pictureErode = cv2.erode(pictureDilate, None, iterations=5)
## find the nonzero regions in the eroded image
imask2 = pictureErode > 0
## create a blue (BGR [255, 0, 0]) canvas the same shape as pictureRaw
canvas = np.full_like(pictureRaw, np.array([255, 0, 0]), dtype=np.uint8)
## set mask
canvas[imask2] = pictureRaw[imask2]
cv2.imwrite("result.png", canvas)
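Since the question also asks for a color histogram, one might append something like this (a sketch; cv2.calcHist requires an 8-bit single-channel mask):
## color histogram over the foreground (bird) pixels only
maskU8 = imask2.astype(np.uint8) * 255
histB = cv2.calcHist([pictureRaw], [0], maskU8, [32], [0, 256])
histG = cv2.calcHist([pictureRaw], [1], maskU8, [32], [0, 256])
histR = cv2.calcHist([pictureRaw], [2], maskU8, [32], [0, 256])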

Python: How to keep region inside canny close edge's area

I'm using the Canny algorithm to find edges.
Next, I want to keep the regions inside the closed curves.
My code sample is:
import cv2
import numpy as np
img1 = cv2.imread('coins.jpg')    # color copy for display
img = cv2.imread('coins.jpg', 0)  # grayscale for edge detection
edges = cv2.Canny(img, 120, 200)
# Canny output is 0 or 255: mark the edge pixels
markers = np.zeros_like(img)
markers[edges < 50] = 0
markers[edges == 255] = 1
img1[markers == 1] = [0, 0, 255]      # edges in red
img1[markers == 0] = [255, 255, 255]  # everything else white
cv2.imshow('Original', img)
cv2.imshow('Canny', img1)
# Wait for user to press a key
cv2.waitKey(0)
My output image is
I want to show the original pixels values inside the coins. Is that possible?
I suggest you use a union-find structure to get the connected components of the white pixels in your img1. (You can find the details of this algorithm on Wikipedia: https://en.wikipedia.org/wiki/Disjoint-set_data_structure.)
Once you have the connected components, my best idea is to take the connected components that do not contain any point on the border of the picture (they should correspond to the interiors of your coins) and color them with the corresponding pixels of img.
Sure, you may have some kinds of triangular regions between the coins that will still be colored, but you could remove the corresponding connected components by hand.
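For illustration, here is a minimal sketch of that idea using scipy.ndimage.label in place of a hand-rolled union-find (the labelling is equivalent; the file name and Canny thresholds are taken from the question):
import cv2
import numpy as np
from scipy import ndimage
img = cv2.imread('coins.jpg', 0)
edges = cv2.Canny(img, 120, 200)
# Label the connected components of the non-edge pixels
labels, n = ndimage.label(edges == 0)
# Components touching the picture border are background, not coin interiors
border_labels = np.unique(np.concatenate(
    [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
# Show the original pixel values inside interior components, white elsewhere
interior = ~np.isin(labels, border_labels)
out = np.full_like(img, 255)
out[interior] = img[interior]
cv2.imwrite('coins_interior.png', out)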
Not really. The coin outlines are not continuous, so any kind of filling will leak.
You can repair the edges with some form of morphological processing (dilation or closing, to bridge the gaps), but this will bring the coins into contact and create unreachable regions between them.
As a fallback solution, you can try a Hough circle detector and mask inside the disks, as sketched below.
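A rough sketch of that fallback (the Hough parameters here are guesses and would need tuning):
import cv2
import numpy as np
img = cv2.imread('coins.jpg', 0)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=200, param2=30, minRadius=20, maxRadius=60)
mask = np.zeros_like(img)
if circles is not None:
    for x, y, r in np.around(circles[0]).astype(int):
        cv2.circle(mask, (x, y), r, 255, -1)  # filled disk per detected coin
# Keep original pixels inside the disks, white elsewhere
result = np.full_like(img, 255)
result[mask > 0] = img[mask > 0]
cv2.imwrite('coins_hough.png', result)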

Image Analysis: Finding proteins in an image

I am attempting to write a program that will automatically locate a protein in an image; this will ultimately be used to differentiate between two proteins of different heights that are present.
The white area on top of the background is a membrane in which the proteins sit, and the white blobs are the proteins. The proteins have two lobes, hence they appear in pairs (each pair is actually one protein).
I have been writing a script in Fiji (Jython) to try to locate the proteins so we can work out their height relative to the local background. So far this involves applying adaptive histogram equalisation and then subtracting the background with a rolling ball of radius 10 pixels. After that I apply a kernel of sorts, 10 pixels by 10 pixels, which works out the average of the 5 centre pixels and divides it by the average of the pixels on the 4 edges of the kernel to get a ratio. If the ratio is above a certain value, the pixel is a candidate.
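In NumPy terms, the test is roughly the following (a sketch only; the exact centre and edge pixel sets here are approximations of the kernel just described):
import numpy as np
from scipy import ndimage
def centre_edge_ratio(img, ratio_threshold=2.0):
    # 10x10 kernel: a ~5-pixel centre cross (approximate) and the kernel border
    centre = np.zeros((10, 10))
    centre[4, 5] = centre[5, 4] = centre[5, 5] = centre[5, 6] = centre[6, 5] = 1
    edge = np.zeros((10, 10))
    edge[0, :] = edge[-1, :] = 1
    edge[:, 0] = edge[:, -1] = 1
    img = img.astype(float)
    centre_avg = ndimage.correlate(img, centre / centre.sum(), mode='reflect')
    edge_avg = ndimage.correlate(img, edge / edge.sum(), mode='reflect')
    return centre_avg / np.maximum(edge_avg, 1e-9) > ratio_threshold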
The output I got was this image, which, apart from some wrapping and sensitivity (ratio = 2.0) issues, seems to be OK. My questions are:
Is this a reasonable approach or is there an obviously better way of doing this?
Can you suggest a way on from here? I am a little stuck now and not really sure how to proceed.
Code, if necessary: http://pastebin.com/D45LNJCu
Thanks!
Sam
How about starting off a bit more simply, using the Harris-point approach to detect local maxima? E.g.:
import numpy as np
from PIL import Image  # modern Pillow import (plain "import Image" is Python 2 era)
from scipy import ndimage
import matplotlib.pyplot as plt
roi = 2.5
peak_threshold = 120
im = Image.open('Q766c.png')
image = np.asarray(im.convert('L'), dtype=float)  # work on a grayscale float copy
size = int(2 * roi + 1)
# Keep only pixels that equal the maximum of their neighbourhood
image_max = ndimage.maximum_filter(image, size=size, mode='constant')
mask = (image == image_max)
image *= mask
# Remove the image borders
image[:size] = 0
image[-size:] = 0
image[:, :size] = 0
image[:, -size:] = 0
# Find peaks
image_t = (image > peak_threshold) * 1
# Get coordinates of peaks
f = np.transpose(image_t.nonzero())
# Show
plt.imshow(np.asarray(im))
plt.plot(f[:, 1], f[:, 0], 'o', markeredgewidth=0.45,
         markeredgecolor='b', markerfacecolor='None')
plt.axis('off')
plt.savefig('local_max.png', format='png', bbox_inches='tight')
plt.show()
Which gives this:
ImageJ "Find maxima" does also similar.
Here is the Jython code
from ij import ImagePlus, IJ, Prefs
from ij.plugin import RGBStackMerge
from ij.process import ImageProcessor, ImageConverter
from ij.plugin.filter import Binary, MaximumFinder
from jarray import array
# define background as black (0)
Prefs.blackBackground = True
# find maxima
#imp = IJ.getImage()
imp = ImagePlus('http://i.stack.imgur.com/Q766c.png')
ImageConverter(imp).convertToGray8()
ip = imp.getProcessor()
segip = MaximumFinder().findMaxima( ip, 10, 200, MaximumFinder.SINGLE_POINTS , False, False)
# display detection result
binner = Binary()
binner.setup("dilate", None)
binner.run(segip)
segimp = ImagePlus("seg", segip)
mergeimp = RGBStackMerge.mergeChannels(array([segimp, imp, None, None, None, None, None], ImagePlus), True)
mergeimp.show()
EDIT: Updated the code to allow processing PNG images (RGB) and to load the image directly from this thread. See comments for more details.
