Find contours of touching objects as two distinct contours - python

I'm wondering how to count objects that touch each other as two distinct objects using cv2.findContours in Python.
For example, on this image:
contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
will output one contour.
What can I do to get two contours instead?

This can be done by converting your input image to an edge image and then detecting contours. But in this case there is a break in the edge image (I tried with Canny) at the intersection of the two objects, as shown below.
Break in Canny:
Whereas the expected edge image should have all the pixels at the boundary as white.
Expected edge image:
Thus, to get this perfect edge image, I have created the algorithm shared below (it will only work on a binary image like this one, with the objects filled with white).
Before using this algorithm, make sure that no object lies on the image boundary, i.e. all boundary pixels of the image must be black. If they are not, add a black border of 1 pixel on all sides of the image.
import cv2
import numpy as np

# img: binary input image (objects white, background black)
# Creating a black single-channel image of the same shape
edge = np.zeros(img.shape, dtype=np.uint8)
h, w = img.shape[:2]

# Iterating over each pixel except those at the boundary
for i in range(1, h-1):
    for j in range(1, w-1):
        # if the current pixel is white
        if img[i][j] == 255:
            # roi is the 3x3 neighbourhood around the current pixel
            roi = img[i-1:i+2, j-1:j+2]
            # Counting the number of black pixels in the neighbourhood
            blackCount = np.sum(roi == 0)
            # if any neighbouring pixel is black, the current pixel is a boundary pixel
            if blackCount > 0:
                edge[i][j] = 255
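As a side note (not part of the original algorithm), the same edge image can be computed without the per-pixel Python loop: a white pixel with at least one black neighbour is exactly a pixel that a 3x3 erosion removes, so subtracting the eroded image from the original gives the boundary. Given the black border required above, the result matches the loop:

kernel = np.ones((3, 3), np.uint8)
edge = cv2.subtract(img, cv2.erode(img, kernel))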
After finding the edge image, get all the contours in the image:
cont, hier = cv2.findContours(edge, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
For this image, you will get 3 contours, 2 for the two objects and 1 for the two objects combined. To eliminate the contour of both the objects combined, use the hierarchy information.
# Keep only those contours which do not have a child contour.
# (A plain list is used here; np.asarray would fail on ragged contour arrays in recent NumPy.)
finalContours = [cont[i] for i in range(len(cont)) if hier[0][i][2] == -1]
"finalContours" will have the 2 contours for the two objects.
Refer to the OpenCV documentation on contour hierarchies for more information about the parent-child relationship of contours.

Related

How to detect columns of rectangles in mask image?

I have found the contours of some rectangles in an image and created a mask, as shown below. What I am trying to do is find the two columns of rectangles highlighted in the image.
The source image:
Columns highlighted:
Desired output:
I'm not sure this can be achieved with a simple algorithm. If those vertically aligned rectangles do not change their position, you could just define a hardcoded ROI (region of interest) for each of them, based on pixel coordinates.
If you are not using any machine learning to solve this, defining ROIs is your best option.
Feel free to ask if anything is unclear.
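For illustration, a minimal sketch of the hardcoded-ROI idea (the filename and coordinates are placeholders you would read off your own image):

import cv2

mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)  # placeholder filename

# hypothetical pixel coordinates of the two columns: (x, y, width, height)
rois = [(40, 10, 60, 380), (300, 10, 60, 380)]

for i, (x, y, w, h) in enumerate(rois):
    column = mask[y:y+h, x:x+w]
    cv2.imwrite('column_{}.png'.format(i), column)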
Assuming the desired columns have significantly more contours than the other columns – as shown in the given example image – simple dilation with some vertical structuring element might be sufficient to find those columns:
Load image as grayscale, get rid of JPG artifacts.
Dilation with "small" vertical structuring element to combine all contours within a column.
Opening with "large" vertical structuring element to neglect all smaller contours. The two columns in question will now have quite large contours.
For safety reasons: Again, dilation with "small" vertical structuring element. Since the columns are slightly rotated, skewed, ... single pixels might've been falsely removed by the opening.
Find remaining contours.
For each contour: Get the bounding boxes in two steps, since we have to refine the bounding box by using the original image (because of step 4).
That'd be the full Python code:
import cv2
import numpy as np

# Read image, get rid of JPG artifacts
img = cv2.imread('gXylF.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)[1]

# Dilating using vertical structuring element to combine contours
mask = cv2.dilate(img, kernel=np.ones((21, 1), np.uint8))

# Opening using vertical structuring element to neglect small contours
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel=np.ones((201, 1), np.uint8))

# Dilating using rectangular structuring element; for safety reasons:
# Single pixels might've been falsely removed by the opening
mask = cv2.dilate(mask, kernel=np.ones((11, 11), np.uint8))

# Find remaining contours w.r.t. the OpenCV version
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

# Iterate remaining contours
for i, cnt in enumerate(cnts):

    # Get bounding rectangle of contour, and region of interest (ROI)
    (x, y, w, h) = cv2.boundingRect(cnt)
    roi = img[y:y+h, x:x+w]

    # Get bounding rectangle of actual values in ROI; the contour of the
    # mask is larger than the actual contours in the original image
    (x, y, w, h) = cv2.boundingRect(roi)
    roi = roi[y:y+h, x:x+w]

    cv2.imwrite('{}.png'.format(i), roi)
The two exported images look like this:
If you, instead, wanted to mark the regions of interest (ROIs) within the original image, pay attention to using the correct coordinates of the bounding boxes: the refined coordinates are relative to the first, coarse ROI, so the two offsets have to be added, as in the sketch below.
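A minimal sketch of that coordinate bookkeeping (it reuses img and cnts from the code above; the output image vis and its filename are placeholders, not part of the original answer):

# Mark the refined bounding boxes on a colour copy of the original image
vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

for cnt in cnts:
    # coarse box from the dilated/opened mask
    (x0, y0, w0, h0) = cv2.boundingRect(cnt)
    roi = img[y0:y0+h0, x0:x0+w0]

    # refined box, relative to the coarse ROI
    (x1, y1, w1, h1) = cv2.boundingRect(roi)

    # add both offsets to get coordinates w.r.t. the original image
    cv2.rectangle(vis, (x0 + x1, y0 + y1), (x0 + x1 + w1, y0 + y1 + h1), (0, 0, 255), 2)

cv2.imwrite('rois_marked.png', vis)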
If your other images show more rotation, skew, etc., you might need to re-draw the contours w.r.t. the original image if neighbouring, small contours should be neglected completely.
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
NumPy: 1.20.1
OpenCV: 4.5.1
----------------------------------------

OpenCV: per color pixel count in irregularly shaped area?

Say I have a multicolored map of the United States, and I want to know how many pixels in a particular state (Nevada, for example) are purple, how many are green, and how many are white. Can I do this with OpenCV?
I've tried to tackle this by turning each state on an uncolored "basemap" into its own contour using cv2.drawContours, then overlaying the two images (This is where things start to feel wrong).
I know that I can then use the following:
Nevada = contours[21]
area = cv2.contourArea(Nevada)
print(area)
to print the total number of pixels in a given state/contour, but I have no idea whether a similar function exists that will show me the number of pixels of a certain color within that state/contour. Is there a way to do this? Any guidance would be much appreciated.
Here is one way to do that in Python/OpenCV.
Read the basemap image as grayscale and threshold
Read the map image
Get all the contours from the thresholded basemap image
Define the colors
Loop over the contours and select contours in a specified range of areas (Adjust the lower bound to get more state contours)
For each acceptable contour, draw it filled in white on a black image
Mask the map image to show only the given contour
Use numpy to sum all the colored pixels in the masked map image
Print the index and color counts
Optionally view each masked map region
Get the centroid of the contour
Draw the index number at the centroid on the map image
At the end, after the loop, save the labeled map image
Base Map:
Map:
import cv2
import numpy as np

# read basemap image as grayscale
basemap = cv2.imread('basemap.png', cv2.IMREAD_GRAYSCALE)

# threshold basemap
thresh = cv2.threshold(basemap, 200, 255, cv2.THRESH_BINARY)[1]

# read map
map = cv2.imread('map.png')

# define colors
red = (255,0,255)
green = (125,196,147)
blue = (232,197,159)
orange = (102,102,224)

# get contours
contours = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]

# print table header
print('{:^15}{:^15}{:^15}{:^15}{:^15}'.format("index", "red_count", "green_count", "blue_count", "orange_count"))

# initialize labeled map
map_labeled = map.copy()

# loop over index and corresponding contour (cntr)
for index, cntr in enumerate(contours):

    # filter on area
    area = cv2.contourArea(cntr)
    if area > 1000 and area < 20000:

        # draw the contour filled in white on a black (3-channel) image
        mask = np.zeros_like(map)
        cv2.drawContours(mask, contours, index, (255,255,255), cv2.FILLED)

        # copy map
        map_masked = map.copy()

        # do bitwise_and between copied map and mask for a given contour
        map_masked = cv2.bitwise_and(map_masked, mask)

        # get counts for given contour
        red_count = np.count_nonzero((map_masked == red).all(axis=2))
        green_count = np.count_nonzero((map_masked == green).all(axis=2))
        blue_count = np.count_nonzero((map_masked == blue).all(axis=2))
        orange_count = np.count_nonzero((map_masked == orange).all(axis=2))

        # print index and counts
        print('{:^15}{:^15}{:^15}{:^15}{:^15}'.format(index, red_count, green_count, blue_count, orange_count))

        # get centroid of contour for label placement
        M = cv2.moments(cntr)
        cx = int(M["m10"] / M["m00"])
        cy = int(M["m01"] / M["m00"])

        # label map with index
        map_labeled = cv2.putText(map_labeled, str(index), (cx,cy), cv2.FONT_HERSHEY_PLAIN, 0.75, (0,0,0))

        # view each state region from map isolated by mask from contour
        # remove the following 3 lines if you do not want to hit the space key for each contour
        cv2.imshow("index", map_masked)
        cv2.waitKey(0)
        cv2.destroyAllWindows()

# save labeled map
cv2.imwrite('map_labeled.png', map_labeled)
Labeled Map:
Terminal Listing Output:
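As a side note (not part of the answer above), if you only need the count of a single colour inside a single contour, cv2.inRange combined with cv2.countNonZero gives the same result more compactly; here cntr is one of the contours found above and green is one of the defined colours:

# single-channel mask of the chosen contour, filled white
mask = np.zeros(map.shape[:2], dtype=np.uint8)
cv2.drawContours(mask, [cntr], -1, 255, cv2.FILLED)

# pixels that match the colour exactly
colour_match = cv2.inRange(map, green, green)

# count only the matches that lie inside the contour
green_count = cv2.countNonZero(cv2.bitwise_and(colour_match, mask))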

How to determine whether object is embossed or debossed using OpenCV?

Let's say I have an image with an embossed or a debossed object, like this
or
Is there a way to determine that the object above is embossed and the object below is debossed using OpenCV? Preferably using C++, but Python is also fine. I couldn't find any good resources on the internet.
Here's an approach which takes advantage of the sunken and lifted contours of the embossed/debossed image. The main idea is:
Convert the image to grayscale
Perform a morphological transformation
Find outlines using Canny edge detection
Dilate canny image to merge individual contours into a single contour
Perform contour detection to find the ROI dimensions of top/bottom halves
Obtain ROI of top/bottom canny image
Count non-zero array elements for each half
Convert to grayscale and perform morphological transformation
Perform Canny edge detection to find the outlines. The key to determining whether an object is embossed or debossed is to compare the Canny edges. The approach: if the object's upper half has more contour/edge pixels than its lower half, it is debossed; if the upper half has fewer pixels than the lower half, it is embossed.
Now that we have the Canny edges, we dilate the image until all the contours connect, so that we obtain one single object.
We then perform contour detection to obtain the ROI of the objects
From here, we separate each object into top and bottom sections
Now that we have the ROI of the top and bottom sections, we crop the ROI in the canny image
With each half, we count non-zero array elements using cv2.countNonZero(). For the embossed object, we get this
top 1085
bottom 1899
For the debossed object, we get this
top 979
bottom 468
Therefore by comparing the values between the two halves, if the top half has less pixels than the bottom, it is embossed. If it has more, it is debossed.
import numpy as np
import cv2

original_image = cv2.imread("1.jpg")
image = original_image.copy()

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
morph = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
canny = cv2.Canny(morph, 130, 255, 1)

# Dilate canny image so contours connect and form a single contour
dilate = cv2.dilate(canny, kernel, iterations=4)

cv2.imshow("morph", morph)
cv2.imshow("canny", canny)
cv2.imshow("dilate", dilate)

# Find contours in the image
cnts = cv2.findContours(dilate.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

contours = []

# For each contour, separate it into top/bottom halves
for c in cnts:
    # Obtain bounding rectangle for each contour
    x,y,w,h = cv2.boundingRect(c)

    # Draw bounding box rectangle
    cv2.rectangle(original_image,(x,y),(x+w,y+h),(0,255,0),3)
    # cv2.rectangle(original_image,(x,y),(x+w,y+h//2),(0,255,0),3) # top
    # cv2.rectangle(original_image,(x,y+h//2),(x+w,y+h),(0,255,0),3) # bottom
    top_half = ((x,y), (x+w, y+h//2))
    bottom_half = ((x,y+h//2), (x+w, y+h))

    # Collect top/bottom ROIs
    contours.append((top_half, bottom_half))

for index, c in enumerate(contours):
    top_half, bottom_half = c

    top_x1,top_y1 = top_half[0]
    top_x2,top_y2 = top_half[1]
    bottom_x1,bottom_y1 = bottom_half[0]
    bottom_x2,bottom_y2 = bottom_half[1]

    # Grab ROI of top/bottom section from canny image
    top_image = canny[top_y1:top_y2, top_x1:top_x2]
    bottom_image = canny[bottom_y1:bottom_y2, bottom_x1:bottom_x2]

    cv2.imshow('top_image', top_image)
    cv2.imshow('bottom_image', bottom_image)

    # Count non-zero array elements
    top_pixels = cv2.countNonZero(top_image)
    bottom_pixels = cv2.countNonZero(bottom_image)
    print('top', top_pixels)
    print('bottom', bottom_pixels)

cv2.imshow("detected", original_image)
print('contours detected: {}'.format(len(contours)))
cv2.waitKey(0)
One insight you could use is that an embossed object is usually brighter than a debossed object.
I would probably do an edge detection to find the "boss-edges" which should form a closed polygon, and compare the relative lightness value of the enclosed "bossment". Special care must be taken for objects with holes, e.g. the letter O, but it is do-able.
You can probably do more sophisticated processing if you know the direction of the light hitting the bossment, e.g. if you know the light is coming from the top left, you can focus only on the top-left edge pixels.
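A rough sketch of the brightness-comparison idea (not a definitive implementation; the filename, Canny thresholds and closing kernel are placeholders, and it assumes the edges close into at least one contour):

import cv2
import numpy as np

gray = cv2.imread('embossed_or_debossed.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder filename

# find the "boss" edges and close them into a region
edges = cv2.Canny(gray, 50, 150)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))

# fill the largest contour to get a mask of the enclosed "bossment"
cnts = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
mask = np.zeros_like(gray)
cv2.drawContours(mask, [max(cnts, key=cv2.contourArea)], -1, 255, cv2.FILLED)

# compare mean brightness inside the enclosed region with the rest of the image
inside = cv2.mean(gray, mask=mask)[0]
outside = cv2.mean(gray, mask=cv2.bitwise_not(mask))[0]
print('embossed' if inside > outside else 'debossed')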

Assign different colors to different biological cells in an image

I need to assign different colors to different biological cells in an image.
To be more specific, the image is only black and white (so a matrix of 0 and 255 only). The content of the cells (excluding boundaries) is represented by white color, whereas cell boundaries are represented by black color. Each cell is enclosed by some cell boundaries or image edges. I hope to assign different colors to different cells, such that I can immediately tell which cells I am currently at by simply looking at the value of its entry.
Edit: It is biological cells. I have found something similar online:
Source: http://brainiac2.mit.edu/isbi_challenge/
A simple approach comes to mind:
Threshold the image to binarize it, since the one you provided contains more than just 0s and 255s.
Find all the contours and organize them into a two-level hierarchy. This can be accomplished by calling cv2.findContours with the flag cv2.RETR_CCOMP:
At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. If there is another contour inside a hole of a connected component, it is still put at the top level.
N.B.: "Components" are the white parts of image, "holes" are the black parts.
Iterate over contours. For each contour of a component (since such contours are placed at the top level of the hierarchy, they have no parent contour) draw the contour polygon filled with a random colour.
Sample Code:
import cv2
import numpy as np

img = cv2.imread('cells.png', cv2.IMREAD_GRAYSCALE)
thresh = cv2.threshold(img, thresh=128, maxval=255, type=cv2.THRESH_BINARY)[1]

# works for both OpenCV 3.x (3 return values) and 4.x (2 return values)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)[-2:]

output = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR)

for i, contour in enumerate(contours):
    # top-level contours (components) have no parent
    if hierarchy[0][i][3] == -1:
        colour = cv2.randu(np.zeros(3, np.uint8), 0, 256)
        cv2.drawContours(output, contours, i, colour.tolist(), -1)

cv2.imwrite('cells_colour.png', output)
Result:
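An alternative route (not part of the answer above) that skips contours entirely is connected-component labelling: cv2.connectedComponents gives every white region its own integer label, which directly answers the "which cell am I in" question, and colours can then be assigned per label. A minimal sketch, assuming the same cells.png input:

import cv2
import numpy as np

img = cv2.imread('cells.png', cv2.IMREAD_GRAYSCALE)
thresh = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)[1]

# each white region (cell interior) gets its own integer label, 0 = background
num_labels, labels = cv2.connectedComponents(thresh)

# map each label to a random colour; keep the background black
colours = np.random.randint(0, 256, size=(num_labels, 3), dtype=np.uint8)
colours[0] = (0, 0, 0)
output = colours[labels]

cv2.imwrite('cells_colour_cc.png', output)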

Get the count of rectangles in an image

I have an image like the one below, and I want to determine the number of rectangles in the image. I do know how to do this if they were filled.
contours = cv2.findContours(image.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if imutils.is_cv2() else contours[1]
print(len(contours))
But this does not work if the rectangle is empty.
I also do not know how to fill the rectangles in the image. I know how to fill the contours if they are drawn using OpenCV, but I do not know how to fill empty rectangles already present in the image.
Assuming you have tried shape detectors, line detection, etc. and not succeeded, here is another way of solving this problem.
If this is a grayscale PNG image, you can use segmentation by color to achieve this.
I would approach it like so:
count = 0
for each pixel in the image:
    if color(pixel) == white /*255*/:
        count++
        floodfill using this pixel as a seed pixel and target color as count

no_of_rectangles = count - 1  /* subtract 1 since the background will be colored too */
This assumes the rectangles have continuous lines, else the floodfill will leak into other rectangles.
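A rough Python sketch of that pseudocode (not from the answer; the filename is a placeholder, and it assumes fewer than 255 white regions so the label values never collide with white):

import cv2

# grayscale input: rectangle outlines black (0), everything else white (255)
img = cv2.imread('rectangles.png', cv2.IMREAD_GRAYSCALE)  # placeholder filename
h, w = img.shape[:2]

count = 0
for y in range(h):
    for x in range(w):
        if img[y, x] == 255:
            count += 1
            # flood fill this white region with the current label value
            cv2.floodFill(img, None, (x, y), count)

# subtract 1 because the white background is filled too
no_of_rectangles = count - 1
print(no_of_rectangles)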
Filled or not should not make a difference if you find only the outer contours (RETR_EXTERNAL). The following code will give you the number 3.
import cv2
import numpy as np

# img: input BGR image containing the rectangles
canvas = np.zeros(img.shape, np.uint8)

img2gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(img2gray, 128, 255, cv2.THRESH_BINARY_INV)

# works for both OpenCV 3.x (3 return values) and 4.x (2 return values)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2:]
print(len(contours))

for cont in contours:
    cv2.drawContours(canvas, [cont], -1, (0, 255, 0), 3)

cv2.imshow('contours', canvas)
cv2.waitKey(30000)
cv2.destroyAllWindows()
Notice if you use RETR_TREE as the 2nd parameter in findContours(), you get all 6 contours, including the inner ones.
Obviously, this assumes that the image only contains rectangles and it doesn't distinguish different shapes.
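If the image may also contain other shapes, one common way to keep only rectangles (a sketch, not part of the answer above; it reuses contours from the code) is to approximate each outer contour with cv2.approxPolyDP and count the convex four-vertex ones:

rect_count = 0
for cont in contours:
    peri = cv2.arcLength(cont, True)
    approx = cv2.approxPolyDP(cont, 0.02 * peri, True)
    # treat a convex approximation with exactly 4 vertices as a rectangle
    if len(approx) == 4 and cv2.isContourConvex(approx):
        rect_count += 1
print(rect_count)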
