Find positions of objects on a noisy image - Python

I have a bunch of images and I need to determine the positions of crosses for a subsequent transformation of the image and the alignment procedure. The problem is that the images are quite noisy, and I'm new to computer vision. Generally, I'm trying to solve the task with OpenCV and Python. I have tried several approaches described in the OpenCV tutorials, but I did not get an appropriate result.
Consider: I need to determine the exact positions of the centers of the crosses (which I can do by hand with roughly pixel accuracy). The best result I have obtained so far is via the findContours function. I adapted code from the tutorial and got:
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
import random
random.seed(42)
img = cv.imread("sample.png")
img_gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
img_gray = cv.blur(img_gray, (3,3))
threshold = 150
dst = cv.Canny(img_gray, threshold, threshold * 2)
# OpenCV 4.x returns (contours, hierarchy); 3.x returned an extra image first
contours, hierarchy = cv.findContours(dst, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
result = np.zeros((dst.shape[0], dst.shape[1], 3), dtype=np.uint8)
for i in range(len(contours)):
    # draw each contour in a random color so they can be told apart
    color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
    cv.drawContours(result, contours, i, color, 2, cv.LINE_8, hierarchy, 0)
cv.imwrite("result.png", result)
fig, ax = plt.subplots()
fig.set_size_inches(10, 10)
ax.imshow(result, interpolation='none', cmap='gray')
which results in the image shown. Now I'm confused about the next steps. How can I tell which contours are crosses and which are not? And what should I do with crosses that consist of multiple contours?
Any help is really appreciated!

A simple way to determine what is a cross and what isn't is to take a bounding box, x,y,w,h = cv2.boundingRect(cnt), over each contour and select those whose h (height) and w (width) are bigger than a threshold you provide. If you observe the image, the noise isn't as big as the crosses.
I have also made an example of how I would try to tackle such a task. You can try denoising the image by performing histogram equalization, followed by thresholding with Otsu's threshold, and then applying an opening (erosion followed by dilation) to the result. Then you can filter out crosses by the height and width of the contour, and calculate the middle point of every bounding box of the contours that meet the criteria. Hope it helps a bit. Cheers!
Example:
import cv2
import numpy as np
img = cv2.imread('croses.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
equ = cv2.equalizeHist(gray)
_, thresh = cv2.threshold(equ,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
kernel = np.ones((2,2),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
# OpenCV 4.x returns (contours, hierarchy); 3.x returned an extra image first
contours, hierarchy = cv2.findContours(opening, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    # keep only contours large enough to be a cross rather than noise
    if w > 40 and h > 40:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.circle(img, (int(x + w / 2), int(y + h / 2)), 3, (0, 0, 255), -1)
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
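If the bounding-box midpoint is not accurate enough for the later alignment step, one possible refinement is the centroid computed from image moments. A minimal sketch, assuming cnt has already passed the size filter above:

M = cv2.moments(cnt)
if M["m00"] > 0:  # guard against degenerate contours with zero area
    cx = int(M["m10"] / M["m00"])
    cy = int(M["m01"] / M["m00"])
    cv2.circle(img, (cx, cy), 3, (255, 0, 0), -1)

The centroid uses the whole contour shape rather than just its extremes, so it is less sensitive to a few noisy boundary pixels than the box midpoint.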

Related

Red dot coordinates detection

I'm trying to follow the movement of a part using red dots. I tried white dots and thresholding before, but there is too much reflection from the smartphone I'm using. The plan is to recognize a dot as a contour, find its center, and fill an array with the coordinates of all contour centers for further calculation.
The code is posted below; it recognizes the correct number of dots, but I get a division-by-zero error. Does anyone know what I'm doing wrong?
Image:https://imgur.com/a/GLXGCPP
import cv2
import numpy as np
from matplotlib import pyplot as plt
import imutils
#load image
img = cv2.imread('dot4_red.jpg')
#apply median blur with a 15x15 aperture to smooth the image
blur = cv2.medianBlur(img,15)
#convert to hsv
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
#color definition
red_lower = np.array([0,0,240])
red_upper = np.array([10,10,255])
#red color mask (sort of thresholding, actually segmentation)
mask = cv2.inRange(hsv, red_lower, red_upper)
#copy the mask, since older findContours versions modify the input image
mask_copy = mask.copy()
#find contours
cnts = cv2.findContours(mask_copy,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
#grab_contours extracts the contour list across OpenCV versions
cnts = imutils.grab_contours(cnts)
#count number of contours of specific size
s1 = 500
s2 = 10000
xcnts = []
for cnt in cnts:
    if s1 < cv2.contourArea(cnt) < s2:
        xcnts.append(cnt)
n = len(xcnts)
#pre-allocate array for extraction of centers of contours
s = (n,2)
array = np.zeros(s)
#fill array of center coordinates
for i in range(0, n):
    cnt = cnts[i]
    moment = cv2.moments(cnt)
    c_x = int(moment["m10"]/moment["m00"])   # this division fails when m00 is 0
    c_y = int(moment["m01"]/moment["m00"])
    array[i,:] = [c_x, c_y]
#display image
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow('image', mask)
cv2.waitKey(0) & 0xFF
cv2.destroyAllWindows()
#print results
print ('number of dots, should be 4:',n)
print ('array of dot center coordinates:',array)
The problem was the wrong color range. Because of it, there were holes in the masks of the circles, which made m00 zero for some contours and caused the division by zero. You can choose the correct color range, pre-fill the holes in the mask, or use this code:
import cv2
import numpy as np
#load image
img = cv2.imread('ZayrMep.jpg')
#apply median blur with a 15x15 aperture to smooth the image
blur = cv2.medianBlur(img,15)
#convert to hsv
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
#color definition
red_lower = np.array([160,210,230])
red_upper = np.array([180,255,255])
#red color mask (sort of thresholding, actually segmentation)
mask = cv2.inRange(hsv, red_lower, red_upper)
connectivity = 4
# Perform the connected-components analysis; this avoids contour moments entirely
output = cv2.connectedComponentsWithStats(mask, connectivity, cv2.CV_32S)
# Get the results; label 0 is the background, so drop it
num_labels = output[0] - 1
centroids = output[3][1:]
#print results
print ('number of dots, should be 4:',num_labels )
print ('array of dot center coordinates:',centroids)
The moment m00 (the area) can be 0 for some shapes, according to the OpenCV documentation. This is probably what is happening here:
Note: Since the contour moments are computed using Green's formula, you may get seemingly odd results for contours with self-intersections, e.g. a zero area (m00) for butterfly-shaped contours.
From: https://docs.opencv.org/3.4/d8/d23/classcv_1_1Moments.html#a8b1b4917d1123abc3a3c16b007a7319b
You need to ensure the area (m00) is not 0 before using it for division.
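A minimal sketch of such a guard, applied to the centroid loop from the question (assuming cnts comes from findContours as above):

for cnt in cnts:
    moment = cv2.moments(cnt)
    if moment["m00"] == 0:
        continue  # degenerate contour with zero area; skip it
    c_x = int(moment["m10"] / moment["m00"])
    c_y = int(moment["m01"] / moment["m00"])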

Separating cell boundaries in the following image and also do nuclear counting using python

I tried to use watershed with Otsu for thresholding, but it's only picking up nuclear boundaries; I want to segment the cell boundaries.
I used Otsu thresholding followed by noise removal with opening, identified the sure background, applied a distance transform and used it to define the sure foreground, defined the unknown region, and created markers:
import cv2
import numpy as np
from skimage import color
img = cv2.imread("images/bio_watershed/Osteosarcoma_01.tif")
cells=img[:,:,0]
# Threshold image to binary using Otsu. All thresholded pixels will be set to 255
ret1, thresh = cv2.threshold(cells, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Morphological operations to remove small noise - opening
kernel = np.ones((3,3),np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
# finding sure background
sure_bg = cv2.dilate(opening,kernel,iterations=10)
#applying distance transform
dist_transform = cv2.distanceTransform(opening,cv2.DIST_L2,5)
ret2, sure_fg = cv2.threshold(dist_transform, 0.5 * dist_transform.max(), 255, 0)
# Unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg,sure_fg)
#Now we create a marker and label the regions inside.
ret3, markers = cv2.connectedComponents(sure_fg)
#add 10 to all labels so that sure background is not 0, but 10
markers = markers+10
# Now, mark the region of unknown with zero
markers[unknown==255] = 0
#applying watershed
markers = cv2.watershed(img,markers)
# color boundaries in yellow.
img[markers == -1] = [0,255,255]
img2 = color.label2rgb(markers, bg_label=0)
cv2.imshow('Overlay on original image', img)
cv2.imshow('Colored Cells', img2)
cv2.waitKey(0)
By running this code I get the following nuclear boundary segmentation, but I want the cell boundaries.
Thanks so much for your help.
I'm not sure if you are still looking for an answer, but I have edited your code to segment the cell boundaries. You need to select the image slice that shows the actin filaments, which is at index 1.
I have also used an edge detector followed by contour drawing to outline the cell boundaries.
Here is my code:
import cv2
import numpy as np
import skimage.io as skio
img = cv2.imread("cells.png")
img_read = img[:,:,1] #Shows the actin filaments
#Denoising
img_denoise = cv2.fastNlMeansDenoising(img_read, None, 45, 7, 35)  # h=45, template=7, search=35
#Canny edge detector
img_canny = cv2.Canny(img_denoise, 10, 200)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(2,2))
img_dilate = cv2.dilate(img_canny, kernel, iterations = 2)
#Contour finding
contours, _ = cv2.findContours(img_dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
img_med = cv2.cvtColor(img_denoise, cv2.COLOR_GRAY2RGB)
img_final = cv2.drawContours(img_med, contours, -1, (0,128,128), 2, 4)
skio.imsave("img_output.tif", img_final)
cv2.imshow('Overlay on original image', img_final)
cv2.waitKey(0)
The example you have is good for color-based segmentation as it is (better resolution would improve the result, though).
The contrast is good enough (and can be improved), so I did a very quick test without using OpenCV (hence no code to share).
Nuclear boundary:
Cell boundary:
Combined:
Or as a separate masks:
So I'd say it is all about delta E and proper segmentation.
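For anyone who wants to reproduce this in code, here is a minimal sketch of the idea in Python/OpenCV. The reference pixel coordinates and the tolerance are assumptions to tune by sampling your own image, and OpenCV's 8-bit Lab channels are offset and scaled, so the distances are only an approximation of delta E:

import cv2
import numpy as np

img = cv2.imread("Osteosarcoma_01.tif")
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
# hand-picked reference colors (hypothetical coordinates -- sample one
# nucleus pixel and one cytoplasm pixel from the actual image)
nucleus_ref = lab[100, 100]
cell_ref = lab[200, 200]
# delta E (CIE76) is the Euclidean distance in Lab space
d_nucleus = np.linalg.norm(lab - nucleus_ref, axis=2)
d_cell = np.linalg.norm(lab - cell_ref, axis=2)
# pixels within a tolerance of a reference color form each mask
nucleus_mask = np.uint8(d_nucleus < 25) * 255
cell_mask = np.uint8(d_cell < 25) * 255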

Extracting components from an edge image and storing for further processing

Input:
Given an edge image, I want to retrieve its components one by one and store each component as an image, so that I can use it later for processing. I guess this is called connected component labeling.
For example, in the input image there are 2 lines, 1 circle, and 2 curves.
I want 5 image files, each containing one of these 5 components.
I was able to come up with the code below, but I do not know how to proceed further. Currently I am getting all the components colored differently in the output.
from scipy import ndimage
import matplotlib.pyplot as plt
import cv2
import numpy as np
fname='..//Desktop//test1.png'
img = cv2.imread(fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#canny
img_canny = cv2.Canny(img,100,200)
threshold = 50
# find connected components
labeled, nr_objects = ndimage.label(img_canny)
print('Number of objects is %d'% nr_objects)
plt.imsave('..//Desktop//out.png', labeled)
Output
New output
You may not need to use cv2.Canny() to segment the contours; you can simply use a binary thresholding technique instead:
img = cv2.imread("/path/to/img.png")
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(img_gray, 130, 255, cv2.THRESH_BINARY_INV)
# Opencv v3.x
# OpenCV 4.x returns (contours, hierarchy); 3.x returned an extra image first
contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for i in range(len(contours)):
    # crop each component by its bounding box and save it as its own file
    rect = cv2.boundingRect(contours[i])
    contour_component = img[rect[1]:rect[1] + rect[3], rect[0]:rect[0] + rect[2]]
    cv2.imwrite("component_{}.png".format(i), contour_component)
This:
num = nr_objects
i = 0
while i < num:
    plt.imshow(labeled)
    i = i + 1
does not loop over the different labels; it just shows the same image num times. You need to do something like:
for i in range(1, num + 1):   # label 0 is the background
    tmp = np.zeros(labeled.shape)
    tmp[labeled == i] = 255
    plt.imshow(tmp)
Then you will see one image per label. If you have any questions, leave a comment.
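Since the original goal was to store each component as a separate image, the same loop can write files instead of displaying them. A minimal sketch, assuming labeled and nr_objects from the ndimage.label call above (the filenames are illustrative):

for i in range(1, nr_objects + 1):   # label 0 is the background
    component = np.zeros(labeled.shape, dtype=np.uint8)
    component[labeled == i] = 255
    plt.imsave('component_%d.png' % i, component, cmap='gray')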

Is there a function similar to OpenCV findContours that detects curves and replaces points with a spline?

I am trying to take the image below, trace the white shape, and export the resulting path to PDF. The problem I have is that findContours seemingly only finds points along the edge of the shape. Is there a solution out there, similar to findContours, that detects curves in a shape and replaces its points with a spline wherever there is a curve? If I use scipy.interpolate, it ignores straight lines and turns the entire contour into one big curved shape, which is no good either. I need something that does both things.
import numpy as np
import cv2
from scipy.interpolate import splprep, splev
from pyx import *
import matplotlib.pyplot as plt
#read in image file
original = cv2.imread('test.jpg')
#blur the image to smooth edges
im = cv2.medianBlur(original,5)
#threshold the image
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,170,255,cv2.THRESH_BINARY)
#findContours
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
#drawContours
cv2.drawContours(original, contours, -1, (0,255,0), 3)
cv2.imshow("Imageee", original)
cv2.waitKey(0)
Instead of using cv2.findContours with the flag cv2.CHAIN_APPROX_SIMPLE to approximate the contours, we can do the approximation manually:
use cv2.findContours with the flag cv2.CHAIN_APPROX_NONE to find the contours;
use cv2.arcLength to calculate the contour length;
use cv2.approxPolyDP to approximate the contour manually with epsilon = eps * arclen.
Here is one of the results when eps=0.005:
More results:
#!/usr/bin/python3
# 2018.01.04 13:01:24 CST
# 2018.01.04 14:42:58 CST
import cv2
import numpy as np
img = cv2.imread("test.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret,threshed = cv2.threshold(gray,170,255,cv2.THRESH_BINARY)
# find contours without approx
cnts = cv2.findContours(threshed,cv2.RETR_LIST,cv2.CHAIN_APPROX_NONE)[-2]
# get the max-area contour
cnt = sorted(cnts, key=cv2.contourArea)[-1]
# calculate the arc length
arclen = cv2.arcLength(cnt, True)
# do approx
eps = 0.0005
epsilon = arclen * eps
approx = cv2.approxPolyDP(cnt, epsilon, True)
# draw the result
canvas = img.copy()
for pt in approx:
    cv2.circle(canvas, (pt[0][0], pt[0][1]), 7, (0,255,0), -1)
cv2.drawContours(canvas, [approx], -1, (0,0,255), 2, cv2.LINE_AA)
# save
cv2.imwrite("result.png", canvas)
I think your problem actually consists of two issues.
The first is to extract the contour, which you can achieve using the findContours function:
import numpy as np
import cv2

print(cv2.__version__)
rMaskgray = cv2.imread('test.jpg', 0)
(thresh, binRed) = cv2.threshold(rMaskgray, 200, 255, cv2.THRESH_BINARY)
# OpenCV 4.x returns (contours, hierarchy); 3.x returned an extra image first
Rcontours, hier_r = cv2.findContours(binRed, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
r_areas = [cv2.contourArea(c) for c in Rcontours]
max_rarea = np.argmax(r_areas)
CntExternalMask = np.ones(binRed.shape[:2], dtype="uint8") * 255
contour = Rcontours[max_rarea]
cv2.drawContours(CntExternalMask, [contour], -1, 0, 1)
print("These are the contour points:")
print(contour)
print("shape:", contour.shape)
for p in contour:
    print(p[0][0])
    cv2.circle(CntExternalMask, (p[0][0], p[0][1]), 5, (0, 255, 0), -1)
cv2.imwrite("contour.jpg", CntExternalMask)
cv2.imshow("Contour image", CntExternalMask)
cv2.waitKey(0)
If you execute the program, the contour points are printed as a list of point coordinates.
The contour approximation method you choose influences the interpolation actually used (and the number of points found), as described here. I have added small dots at the points found with the approximation method cv2.CHAIN_APPROX_SIMPLE. You can see that the straight lines are already approximated.
I may not have fully understood your second step, though. You want to omit some of those points, replacing the point lists partially with splines. There might be different ways to do this, depending on your final intention. Do you just want to replace the straight lines? If you replace curved parts, what margin of error are you allowing?
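To make the spline part concrete, here is a minimal sketch of fitting one curved segment with scipy (the slice bounds are hypothetical; contour is the largest contour extracted above, with shape (N, 1, 2)):

import numpy as np
from scipy.interpolate import splprep, splev

# take a (hypothetical) curved stretch of the contour
pts = contour[100:200, 0, :]
# fit a smoothing B-spline through the segment and resample it densely
tck, u = splprep([pts[:, 0], pts[:, 1]], s=5.0)
x_new, y_new = splev(np.linspace(0, 1, 200), tck)

Deciding which stretches count as curved (for example, from the turning angle between successive points) is the part you would still have to define.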
# import the necessary packages
import numpy as np
import cv2

# for saving the pdf
def save_pdf(imagename):
    import img2pdf
    # convert the image file to a single-page pdf
    with open("output.pdf", "wb") as f:
        f.write(img2pdf.convert(imagename))

# for finding the biggest contour
def bigercnt(contours):
    max_area = 0
    cnt = []
    for ii in contours:
        area = cv2.contourArea(ii)
        if area > max_area:
            max_area = area  # track the largest area seen so far
            cnt = ii
    return cnt

# STARTING
print("Reading img.jpg file")
# load the image, convert it to grayscale, and blur it slightly
image = cv2.imread('img.jpg')
image = cv2.resize(image, (0, 0), fx=0.5, fy=0.5)
print("Converting it to gray scale")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
print("Blurring")
blurred = cv2.GaussianBlur(gray, (3, 3), 0)
print("Looking for edges")
# apply Canny edge detection with a tight threshold
tight = cv2.Canny(blurred, 255, 250)
print("Looking for contours")
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10, 10))
close = cv2.morphologyEx(tight, cv2.MORPH_CLOSE, kernel)
# OpenCV 4.x returns (contours, hierarchy); 3.x returned an extra image first
contours, hierarchy = cv2.findContours(close.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
print("Looking for the big contour")
cnt = bigercnt(contours)
print("Cropping the found contour")
x, y, w, h = cv2.boundingRect(cnt)
croped_image = image[y:y+h, x:x+w]
img2 = np.zeros((h, w, 4), np.uint8)
print("Taking only pixels in the contour and creating a png")
for i in range(h):
    for j in range(w):
        # keep pixels inside the contour, make the rest transparent
        if cv2.pointPolygonTest(cnt, (x + j, y + i), False) == 1:
            img2[i, j] = [croped_image[i, j][0], croped_image[i, j][1], croped_image[i, j][2], 255]
        else:
            img2[i, j] = [255, 255, 255, 0]
print("Showing output image")
# show the output image
cv2.imshow('output', img2)
params = [cv2.IMWRITE_PNG_COMPRESSION, 8]
print("Saving output image")
cv2.imwrite("output.png", img2, params)
print("Finished: converted")
cv2.waitKey(0)
cv2.destroyAllWindows()

Improve contour detection with OpenCV (Python)

I am trying to identify cards from a photo. I managed to do what I wanted on ideal photos, but I am now having a hard time applying the same procedure with slightly different lighting, etc. So the question is about making the following contour detection more robust.
I need to share a big part of my code so that readers can reproduce the images of interest, but my question relates only to the last block and image.
import numpy as np
import cv2
from matplotlib import pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
import math
img = cv2.imread('image.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
Then the cards are detected:
# Prepocess
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(1,1),1000)
flag, thresh = cv2.threshold(blur, 120, 255, cv2.THRESH_BINARY)
# Find contours
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea,reverse=True)
# Select long perimeters only
perimeters = [cv2.arcLength(contours[i],True) for i in range(len(contours))]
listindex=[i for i in range(15) if perimeters[i]>perimeters[0]/2]
numcards=len(listindex)
# Show image
imgcont = img.copy()
[cv2.drawContours(imgcont, [contours[i]], 0, (0,255,0), 5) for i in listindex]
plt.imshow(imgcont)
The perspective is corrected:
#plt.rcParams['figure.figsize'] = (3.0, 3.0)
warp = [None] * numcards
for i in range(numcards):
    card = contours[i]
    peri = cv2.arcLength(card, True)
    approx = cv2.approxPolyDP(card, 0.02 * peri, True)
    rect = cv2.minAreaRect(contours[i])
    r = cv2.cv.BoxPoints(rect)  # cv2.boxPoints(rect) in OpenCV 3+
    h = np.array([[0, 0], [399, 0], [399, 399], [0, 399]], np.float32)
    approx = np.array([item for sublist in approx for item in sublist], np.float32)
    transform = cv2.getPerspectiveTransform(approx, h)
    warp[i] = cv2.warpPerspective(img, transform, (400, 400))
# Show perspective correction
fig = plt.figure(1, (10,10))
grid = ImageGrid(fig, 111,           # similar to subplot(111)
                 nrows_ncols=(4, 4), # creates a 4x4 grid of axes
                 axes_pad=0.1,       # pad between axes in inches
                 aspect=True)        # force equal aspect
for i in range(numcards):
    grid[i].imshow(warp[i])  # the ImageGrid object works as a list of axes
That is where I am having my problem. I want to detect the contours of the shapes. The best way I found is a combination of bilateralFilter and adaptiveThreshold on a gray image:
fig = plt.figure(1, (10, 10))
grid = ImageGrid(fig, 111,           # similar to subplot(111)
                 nrows_ncols=(4, 4), # creates a 4x4 grid of axes
                 axes_pad=0.1,       # pad between axes in inches
                 aspect=True)        # force equal aspect
for i in range(numcards):
    image2 = cv2.bilateralFilter(warp[i].copy(), 10, 100, 100)
    grey = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
    # legacy cv2.cv API; the modern equivalent is cv2.adaptiveThreshold
    grey2 = cv2.cv.AdaptiveThreshold(cv2.cv.fromarray(grey), cv2.cv.fromarray(grey), 255, cv2.cv.CV_ADAPTIVE_THRESH_MEAN_C, cv2.cv.CV_THRESH_BINARY, blockSize=31, param1=6)
    grid[i].imshow(grey, cmap=plt.cm.binary)
This is very close to what I would like, but how can I improve it to get closed contours in white, and everything else in black?
Why not just use Canny and apply perspective correction after finding the contours (since the correction seems to blur the edges)? For example, using the small image you provided in your question (the result could be better on a bigger one):
Based on some parts of your code:
import numpy as np
import cv2
img = cv2.imread('image.bmp')
# Prepocess
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
flag, thresh = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)
# Find contours
# OpenCV 4.x returns (contours, hierarchy); 3.x returned an extra image first
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
# Select long perimeters only
perimeters = [cv2.arcLength(contours[i],True) for i in range(len(contours))]
listindex=[i for i in range(15) if perimeters[i]>perimeters[0]/2]
numcards=len(listindex)
card_number = -1 #just so happened that this is the worst case
stencil = np.zeros(img.shape).astype(img.dtype)
cv2.drawContours(stencil, [contours[listindex[card_number]]], 0, (255, 255, 255), cv2.FILLED)
res = cv2.bitwise_and(img, stencil)
cv2.imwrite("out.bmp", res)
canny = cv2.Canny(res, 100, 200)
cv2.imwrite("canny.bmp", canny)
First, remove everything except a single card for simplicity, then apply Canny edge detector:
Then you can dilate/erode, correct the perspective, remove the largest contour, etc.
Except for the image in the bottom-right corner, the following steps should generally work (a sketch follows the list):
Dilate and erode the binary masks to bridge any one- or two-pixel gaps between contour fragments.
Use non-maximal suppression to turn the thick binary masks along the boundaries of your shapes into thin edges.
As used earlier in the pipeline, use cv2.findContours to identify closed contours. Each contour identified by the method can be tested for being closed.
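A minimal sketch of steps 1 and 3, assuming mask is one of the binary masks above (the kernel size and the area threshold are assumptions to tune):

import cv2
import numpy as np

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
# closing (dilation followed by erosion) bridges one- or two-pixel gaps
bridged = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(bridged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# a contour that encloses a real region has non-trivial area; near-zero
# area suggests an open fragment rather than a closed shape
closed = [c for c in contours if cv2.contourArea(c) > 50]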
As a general solution to such problems, I would advise you to try my algorithm to find closed contours around a given point. Check active segmentation with fixation
