Generate binary image of individual contours - python

I'm trying to segment the football field. I have found the largest contour that represents the field area. I need to generate a binary image using that area.
I'm following a research paper and have completed all the steps, including:
Convert to HSV
Capture the H channel
Generate a histogram
Some processing (not detailed here, as it's irrelevant to the question)
Find the largest blob in the binary image
I have done this using contours, and I have the largest contour, which represents the field area.
I need to use this specific contour to generate a new binary image that contains only the area of this contour.
# Find the largest blob
# mask is the processed binary image; using that mask I find the
# contours and draw the largest one on the original image
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
largest_blob = max(contours, key=cv2.contourArea)
cv2.drawContours(image, [largest_blob], -1, (0, 0, 255), 2)

I have done it using the cv2.fillPoly(noiseless_mask, [largest_blob], 255) function, where noiseless_mask = np.zeros_like(mask).
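Putting that together, a minimal sketch of the mask-generation step might look like this (assuming mask and largest_blob from the code above):
import cv2
import numpy as np
# Start from an all-black image of the same size as the processed mask
noiseless_mask = np.zeros_like(mask)
# Fill the area of the largest contour with white
cv2.fillPoly(noiseless_mask, [largest_blob], 255)
# noiseless_mask is now a binary image containing only the field area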

You may do this: first, find the bounding rectangle for the contour.
The bounding rectangle is indicated by the green box.
x,y,w,h = cv2.boundingRect(cnt)
Then use these values to crop the image:
field = img[y:y+h, x:x+w, :]
Then you may apply binarization to the field object; more info can be found here:
https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
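As a rough sketch of those steps (the fixed threshold of 127 is just a placeholder; Otsu's method or an adaptive threshold may suit the field image better):
x, y, w, h = cv2.boundingRect(cnt)
field = img[y:y+h, x:x+w, :]
# Convert the cropped field to grayscale and binarize it
field_gray = cv2.cvtColor(field, cv2.COLOR_BGR2GRAY)
_, field_bin = cv2.threshold(field_gray, 127, 255, cv2.THRESH_BINARY)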

cv2.boundingRect creating issue

I am working on preprocessing images of digits. Each image contains a single number and some unwanted contours.
This noise can be in the form of lines or small dots. My task is to remove these contours, so I decided to use the following algorithm:
Find all contours
Sort the contours according to area in descending order
Iterate through the contours from index 2 to the last
Create a boundingRect and fill that part of the image with 255
contours, hierarchy = cv2.findContours(image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
for c in contours[2:]:
    (x, y, w, h) = cv2.boundingRect(c)
    image[y:y + h, x:x + w] = 255
Now the problem is that, in the case of the first image, boundingRect returns (14, 14, 150, 150), which covers the digit too. Is there a better alternative to boundingRect so that only the contour area is replaced?
The output images were the following:
The original image files were the following:
I'm able to achieve the desired output with these steps:
Invert the colors of the image (or change the retrieval mode to one that also returns inner contours, such as RETR_LIST, when finding contours). White pixels have a value of 1 and black pixels a value of 0, so black contours cannot be detected by the RETR_EXTERNAL operation.
Sort the contours according to area, as you did.
Fill the contours, starting from the second largest, with white.
Code:
# Images I copied from here were not single channel, so convert to single
# channel to be able to find contours
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Copy the original image to draw on it later on
img_copy = img.copy()
# Invert the colors if using RETR_EXTERNAL
img = cv2.bitwise_not(img)
# Note that findContours returns the hierarchy as well as the contours in
# OpenCV 4.x; in OpenCV 3.x it used to return 3 values
contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Sort the contours in descending order according to their area
contours = sorted(contours, key=cv2.contourArea, reverse=True)
# Fill the detected contours, starting from the second largest, with white
for c in contours[1:]:
    print(c)
    img_copy = cv2.fillPoly(img_copy, [c], color=(255, 255, 255))
Output: (Black bar between windows is my background)

How to detect columns of rectangles in mask image?

I have found contours of some rectangles in an image and created a mask as shown below. What I am trying to do is find the two columns of rectangles highlighted in the image.
The source image:
Columns highlighted:
Desired output:
I'm not sure you can achieve that with a simple algorithm. If those vertically aligned rectangles do not change their position, you could just define a hardcoded ROI (region of interest) for them, based on pixel coordinates.
If you're not using any machine learning to solve this, defining an ROI is your best option.
Feel free to ask, if needed.
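For illustration, a minimal sketch of the hardcoded-ROI idea; the file name and coordinates below are made-up placeholders that would have to be read off your own image:
import cv2
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)  # placeholder file name
# (x, y, width, height) of the two columns, measured by hand from the image
rois = [(100, 50, 60, 400), (300, 50, 60, 400)]
# Crop each column out of the mask
columns = [mask[y:y + h, x:x + w] for (x, y, w, h) in rois]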
Assuming the desired columns have significantly more contours than the other columns – as shown in the given example image – simple dilation with some vertical structuring element might be sufficient to find those columns:
Load image as grayscale, get rid of JPG artifacts.
Dilation with "small" vertical structuring element to combine all contours within a column.
Opening with "large" vertical structuring element to neglect all smaller contours. The two columns in question will now have quite large contours.
For safety reasons: Again, dilation with "small" vertical structuring element. Since the columns are slightly rotated, skewed, ... single pixels might've been falsely removed by the opening.
Find remaining contours.
For each contour: Get the bounding boxes in two steps, since we have to refine the bounding box by using the original image (because of step 4).
That'd be the full Python code:
import cv2
import numpy as np
# Read image, get rid of JPG artifacts
img = cv2.imread('gXylF.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)[1]
# Dilating using vertical structuring element to combine contours
mask = cv2.dilate(img, kernel=np.ones((21, 1)))
# Opening using vertical structuring element to neglect small contours
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel=np.ones((201, 1)))
# Dilating using rectangular structuring element; for safety reasons:
# Single pixels might've been falsely removed by the opening
mask = cv2.dilate(mask, kernel=np.ones((11, 11)))
# Find remaining contours w.r.t. the OpenCV version
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
# Iterate remaining contours
for i, cnt in enumerate(cnts):
    # Get bounding rectangle of contour, and region of interest (ROI)
    (x, y, w, h) = cv2.boundingRect(cnt)
    roi = img[y:y+h, x:x+w]

    # Get bounding rectangle of actual values in ROI; the contour of the
    # mask is larger than the actual contours in the original image
    (x, y, w, h) = cv2.boundingRect(roi)
    roi = roi[y:y+h, x:x+w]
    cv2.imwrite('{}.png'.format(i), roi)
The two exported images look like this:
If you instead wanted to mark the regions of interest (ROIs) within the original image, take care to use the correct coordinates of the bounding boxes (the refined coordinates are w.r.t. the first, cut-out ROI).
If your other images show more rotation, skew, etc., you might need to re-draw the contours w.r.t. the original image if neighbouring small contours should be neglected completely.
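A short sketch of that coordinate correction (not part of the answer's code above; (x0, y0) is the top-left corner of the mask-based bounding box, and the refined box found inside that ROI is shifted by it before drawing on the original image):
(x0, y0, w0, h0) = cv2.boundingRect(cnt)
roi = img[y0:y0+h0, x0:x0+w0]
# Refined bounding box, relative to the cut-out ROI
(x, y, w, h) = cv2.boundingRect(roi)
# Shift by (x0, y0) so the rectangle lands correctly in the original image
cv2.rectangle(img, (x0 + x, y0 + y), (x0 + x + w, y0 + y + h), 128, 2)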
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
NumPy: 1.20.1
OpenCV: 4.5.1
----------------------------------------

How to determine whether object is embossed or debossed using OpenCV?

Let's say I have images with embossed and debossed objects, like this
or
Is there a way to determine that the above object is embossed and the below object is debossed using OpenCV? Preferably in C++, but Python is also fine. I couldn't find any good resource on the internet.
Here's an approach which takes advantage of the sunken and lifted contours of the embossed/debossed image. The main idea is:
Convert the image to grayscale
Perform a morphological transformation
Find outlines using Canny edge detection
Dilate canny image to merge individual contours into a single contour
Perform contour detection to find the ROI dimensions of top/bottom halves
Obtain ROI of top/bottom canny image
Count non-zero array elements for each half
Convert to grayscale and perform morphological transformation
Perform Canny edge detection to find outlines. The key to determining whether an object is embossed or debossed is to compare the Canny edges. Here's the approach: we look at the object; if its upper half has more contours/lines/pixels than its lower half, it is debossed. Similarly, if the upper half has fewer pixels than its lower half, it is embossed.
Now that we have the Canny edges, we dilate the image until all the contours connect, so we obtain one single object.
We then perform contour detection to obtain the ROI of the objects
From here, we separate each object into top and bottom sections
Now that we have the ROI of the top and bottom sections, we crop the ROI in the canny image
With each half, we count non-zero array elements using cv2.countNonZero(). For the embossed object, we get this
('top', 1085)
('bottom', 1899)
For the debossed object, we get this
('top', 979)
('bottom', 468)
Therefore, by comparing the values between the two halves: if the top half has fewer pixels than the bottom, it is embossed; if it has more, it is debossed.
import numpy as np
import cv2

original_image = cv2.imread("1.jpg")
image = original_image.copy()

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
morph = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
canny = cv2.Canny(morph, 130, 255, 1)

# Dilate canny image so contours connect and form a single contour
dilate = cv2.dilate(canny, kernel, iterations=4)

cv2.imshow("morph", morph)
cv2.imshow("canny", canny)
cv2.imshow("dilate", dilate)

# Find contours in the image
cnts = cv2.findContours(dilate.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

contours = []

# For each contour, separate it into top/bottom halves
for c in cnts:
    # Obtain bounding rectangle for each contour
    x, y, w, h = cv2.boundingRect(c)

    # Draw bounding box rectangle
    cv2.rectangle(original_image, (x, y), (x + w, y + h), (0, 255, 0), 3)
    # cv2.rectangle(original_image, (x, y), (x + w, y + h // 2), (0, 255, 0), 3)  # top
    # cv2.rectangle(original_image, (x, y + h // 2), (x + w, y + h), (0, 255, 0), 3)  # bottom

    # Use integer division so the split coordinates remain valid array indices
    top_half = ((x, y), (x + w, y + h // 2))
    bottom_half = ((x, y + h // 2), (x + w, y + h))

    # Collect top/bottom ROIs
    contours.append((top_half, bottom_half))

for index, c in enumerate(contours):
    top_half, bottom_half = c

    top_x1, top_y1 = top_half[0]
    top_x2, top_y2 = top_half[1]
    bottom_x1, bottom_y1 = bottom_half[0]
    bottom_x2, bottom_y2 = bottom_half[1]

    # Grab ROI of top/bottom section from canny image
    top_image = canny[top_y1:top_y2, top_x1:top_x2]
    bottom_image = canny[bottom_y1:bottom_y2, bottom_x1:bottom_x2]

    cv2.imshow('top_image', top_image)
    cv2.imshow('bottom_image', bottom_image)

    # Count non-zero array elements
    top_pixels = cv2.countNonZero(top_image)
    bottom_pixels = cv2.countNonZero(bottom_image)

    print('top', top_pixels)
    print('bottom', bottom_pixels)

cv2.imshow("detected", original_image)
print('contours detected: {}'.format(len(contours)))
cv2.waitKey(0)
One insight you could use is that an embossed object is usually brighter than a debossed object.
I would probably do edge detection to find the "boss edges", which should form a closed polygon, and compare the relative lightness of the enclosed "bossment". Special care must be taken for objects with holes, e.g. the letter O, but it is doable.
You can probably do more sophisticated processing if you know the direction of the light hitting the bossment, e.g. if you know the light is coming from the top left, you can focus only on the top-left edge pixels.
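A rough, untested sketch of that brightness idea (the file name and Canny thresholds are placeholders, it assumes OpenCV 4.x, and it takes the largest detected contour as the closed boss edge):
import cv2
import numpy as np
gray = cv2.cvtColor(cv2.imread('boss.jpg'), cv2.COLOR_BGR2GRAY)  # placeholder file name
edges = cv2.Canny(gray, 50, 150)
# Take the largest contour as the boss edge and fill it to get a mask
cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(cnts, key=cv2.contourArea)
mask = np.zeros_like(gray)
cv2.drawContours(mask, [cnt], -1, 255, -1)
# Compare the mean lightness inside the bossment against the background
inside = cv2.mean(gray, mask=mask)[0]
outside = cv2.mean(gray, mask=cv2.bitwise_not(mask))[0]
print('embossed' if inside > outside else 'debossed')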

Get the count of rectangles in an image

I have an image like the one below, and I want to determine the number of rectangles in the image. I know how to do this if they are filled.
contours = cv2.findContours(image.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if imutils.is_cv2() else contours[1]
print(len(contours))
But this does not work if the rectangle is empty.
I also do not know how to fill the rectangles in the image. I know how to fill the contours if they are drawn using OpenCV, but I do not know how to fill empty rectangles already present in the image.
Assuming you have tried shape detectors, line detection, etc. and not succeeded, here is another way of solving this problem.
If this is a grayscale PNG image, you can use segmentation by color to achieve this.
I would approach it like so:
count = 0
for each pixel in the image:
    if color(pixel) == white /* 255 */:
        count++
        flood fill using this pixel as the seed pixel and count as the target color
no_of_rectangles = count - 1 /* subtract 1, since the background will be colored too */
This assumes the rectangles have continuous lines, else the floodfill will leak into other rectangles.
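A small Python sketch of that pseudocode, assuming image is a single-channel 8-bit array in which the background and the rectangle interiors are white (255), the rectangle outlines are darker, and there are fewer than 255 regions (so the running count can double as a fill value):
import cv2
count = 0
h, w = image.shape
for y in range(h):
    for x in range(w):
        if image[y, x] == 255:  # an as-yet-unvisited white pixel
            count += 1
            # Flood fill from this seed, recolouring the whole region with count
            cv2.floodFill(image, None, (x, y), count)
no_of_rectangles = count - 1  # subtract 1: the background gets counted too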
Filled or not should not make a difference if you find only the outer contours (RETR_EXTERNAL). The following code will give you the number 3.
import cv2
import numpy as np

# img is the input BGR image
canvas = np.zeros(img.shape, np.uint8)
img2gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(img2gray, 128, 255, cv2.THRESH_BINARY_INV)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print(len(contours))

for cont in contours:
    cv2.drawContours(canvas, [cont], -1, (0, 255, 0), 3)

cv2.imshow('contours', canvas)
cv2.waitKey(30000)
cv2.destroyAllWindows()
Notice if you use RETR_TREE as the 2nd parameter in findContours(), you get all 6 contours, including the inner ones.
Obviously, this assumes that the image only contains rectangles and it doesn't distinguish different shapes.

Copy area inside contours to another image

I have the following code in Python to find contours in my image:
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 127, 255, 0)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
Now I want to copy the area inside the first contour to another image, but I can't find any tutorial or example code that shows how to do that.
Here's a fully working example. It's a bit overkill in that it outputs all the contours, but I think you can tweak it to your liking. Also, I'm not sure what you mean by copying, so I'll assume you just want each contour written out to a file.
We will start with an image like so (in this case you will notice I don't need to threshold the image). The script below can be broken down into 6 major steps:
Canny filter to find the edges
cv2.findContours to keep track of our contours; note that we only need outer contours, hence the cv2.RETR_EXTERNAL flag.
cv2.drawContours draws the shapes of each contour to our image
Loop through all contours and put bounding boxes around them.
Use the x,y,w,h information of our boxes to help us crop every contour
Write the cropped image to a file.
import cv2

image = cv2.imread('images/blobs1.png')
edged = cv2.Canny(image, 175, 200)

contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image, contours, -1, (0, 255, 0), 3)
cv2.imshow("Show contour", image)
cv2.waitKey(0)
cv2.destroyAllWindows()

for i, c in enumerate(contours):
    rect = cv2.boundingRect(c)
    x, y, w, h = rect
    box = cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cropped = image[y: y + h, x: x + w]
    cv2.imshow("Show Boxes", cropped)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    cv2.imwrite("blobby" + str(i) + ".png", cropped)

cv2.imshow("Show Boxes", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I'm not familiar with cv2.findContours, but I imagine that a contour is represented by an array of points with row/column (x/y) values. If this is the case and the contour is a single pixel in width, then there should be two points for every row, one each at the left and right extremes of the contour.
For each row in the contour:
    select all the points in the image that are between the two contour points for that row
    save those points to a new array
As @DanMašek points out, if the points in the contour array describe a simple shape with only the ends, corners or breakpoints represented, then you would need to fill in the gaps to use the method above.
Also, if the contour shape is something like a star, you would need a different method for determining whether an image point is inside the contour. The method I posted is a bit naive, but it might be a good starting point. For a convoluted shape like a star, there might be multiple points per row of the contour, but it seems the points would come in pairs and the points you are interested in would be between the pairs.
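A hedged NumPy sketch of that row-scan idea, using im and contours from the question's code and assuming a closed, roughly one-pixel-wide contour where filling between the leftmost and rightmost point of each row is acceptable:
import cv2
import numpy as np
cnt = contours[0].reshape(-1, 2)  # contour as (x, y) point pairs
mask = np.zeros(im.shape[:2], dtype=np.uint8)
for row in np.unique(cnt[:, 1]):
    xs = cnt[cnt[:, 1] == row, 0]
    # Mark everything between the leftmost and rightmost contour point of this row
    mask[row, xs.min():xs.max() + 1] = 255
# Copy only the area inside the contour to a new image
copied = cv2.bitwise_and(im, im, mask=mask)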