I am working on preprocessing images of digits. Each image contains a single number and some unwanted contours.
This noise can take the form of lines or small dots. My task is to remove these contours, so I decided to use the following algorithm:
Find all contours
Sort the contours according to the area in descending order
Iterate through the contours from index 2 to the last
Create a boundingRect and fill that part of the image with 255
contours, hierarchy = cv2.findContours(image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
for c in contours[2:]:
    (x, y, w, h) = cv2.boundingRect(c)
    image[y:y + h, x:x + w] = 255
The problem is that, for the first image, boundingRect returns (14, 14, 150, 150), which covers the digit too. My question is: is there a better alternative to boundingRect so that only the contour area is replaced?
The output images were the following:
The original image files are the following:
I'm able to achieve the desired output with these steps:
Invert the colors of the image (or switch to a retrieval mode that also returns inner contours, such as RETR_LIST, when finding contours). findContours treats white (non-zero) pixels as foreground and black (zero) pixels as background, so black shapes cannot be detected with RETR_EXTERNAL.
Sort the contours according to area, as you did.
Fill the contours, starting from the second largest, with white.
Code:
# Images I copied from here were not single channel, so convert to single channel to be able to find contours.
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Copy the original image to draw on it later on
img_copy = img.copy()
# Invert the color if using RETR_EXTERNAL
img = cv2.bitwise_not(img)
# Note that in OpenCV 4.x findContours returns the contours and the hierarchy; in OpenCV 3.x it used to return 3 values.
contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Sort the contours in descending order according to their area
contours = sorted(contours, key=cv2.contourArea, reverse=True)
# Fill the detected contours, starting from the second largest, with white
for c in contours[1:]:
    print(c)
    img_copy = cv2.fillPoly(img_copy, [c], color=(255, 255, 255))
Output: (Black bar between windows is my background)
I'm trying to segment the football field. I have found the largest contour that represents the field area. I need to generate a binary image using that area.
I'm following a research paper and have completed all the steps, including:
Convert to HSV
Capture the H channel
Generate Histogram
Some processing (not detailed here, as it is irrelevant to the question)
Find the largest blob in the binary image
I have done it using Contours and I have the largest contour which represents the field area.
I need to use this specific contour to generate a new binary image which will contain only the area of this contour.
# Find Largest Blob
# mask is the processed binary image
# using that mask I find the contours and draw them on the original image
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
largest_blob = max(contours, key=cv2.contourArea)
cv2.drawContours(image, [largest_blob], -1, (0, 0, 255), 2)
I have done it using cv2.fillPoly(noiseless_mask, [largest_blob], 255), where noiseless_mask = np.zeros_like(mask).
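For reference, a minimal sketch of that approach, assuming mask is the processed binary image described above:
import cv2
import numpy as np
# Find the largest blob in the processed binary image
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largest_blob = max(contours, key=cv2.contourArea)
# Draw the largest contour, filled with white, onto an empty single-channel image
noiseless_mask = np.zeros_like(mask)
cv2.fillPoly(noiseless_mask, [largest_blob], 255)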
You may do this: first find the bounding rectangle for that contour.
The bounding rectangle is indicated by the green box
x,y,w,h = cv2.boundingRect(cnt)
Then use these to crop the image
field = img[y:y+h, x:x+w, :]
Then you may apply binarization to the field object; more info can be found here:
https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
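A minimal sketch of that binarization step, assuming field is the BGR crop from above (Otsu's threshold is just one option):
gray_field = cv2.cvtColor(field, cv2.COLOR_BGR2GRAY)
ret, binary_field = cv2.threshold(gray_field, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)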
I would like to find the contour of the rectangular photograph inside of the object. I've tried using the corner detection feature of OpenCV, but to no avail. I also tried to find all the contours using findContours, and filter out the contours with more (or less) than 4 edges, but this also didn't lead anywhere.
I have a sample scan here.
I have a solution for you, but it involves a lot of steps. Also, it may not generalize that well, but it does work pretty well for your image.
First a grayscale and threshold are made, and findContours is used to create a mask of the paper area. That mask is inverted and combined with the original image, which makes the black edges white. A new grayscale and threshold are made on the resulting image, which is then inverted so findContours can find the dark pixels of the photo. A rotated box around the largest contour is selected, which is the area you seek.
I added a little extra, which you may not need but could be convenient: a perspective warp is applied to the box, so the area you want is made into a straight rectangle.
There is quite a lot happening, so I advise you to take some time to look at the intermediate steps to understand what happens.
Result:
Code:
import numpy as np
import cv2
# load image
image = cv2.imread('photo.jpg')
# resize to easily view on screen, remove for final processing
image = cv2.resize(image,None,fx=0.2, fy=0.2, interpolation = cv2.INTER_CUBIC)
### remove outer black edge
# create grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# perform threshold
retr, mask = cv2.threshold(gray_image, 190, 255, cv2.THRESH_BINARY)
# remove noise
kernel = np.ones((5,5),np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
# create empty mask
mask_2 = np.zeros(image.shape[:3], dtype=image.dtype)
# find contours
ret, contours, hier = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# draw the found shapes (white, filled in ) on the empty mask
for cnt in contours:
    cv2.drawContours(mask_2, [cnt], 0, (255,255,255), -1)
# invert mask and combine with original image - this makes the black outer edge white
mask_inv_2 = cv2.bitwise_not(mask_2)
tmp = cv2.bitwise_or(image, mask_inv_2)
### Select photo - not inner edge
# create grayscale
gray_image2 = cv2.cvtColor(tmp, cv2.COLOR_BGR2GRAY)
# perform threshold
retr, mask3 = cv2.threshold(gray_image2, 190, 255, cv2.THRESH_BINARY)
# remove noise
maskX = cv2.morphologyEx(mask3, cv2.MORPH_CLOSE, kernel)
# invert mask, so photo area can be found with findcontours
maskX = cv2.bitwise_not(maskX)
# findcontours
ret, contours2, hier = cv2.findContours(maskX, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# select the largest contour
largest_area = 0
for cnt in contours2:
    if cv2.contourArea(cnt) > largest_area:
        cont = cnt
        largest_area = cv2.contourArea(cnt)
# find the rectangle (and the corner points of that rectangle) that surrounds the contour / photo
rect = cv2.minAreaRect(cont)
box = cv2.boxPoints(rect)
box = np.int0(box)
print(rect)
#### Warp image to square
# assign cornerpoints of the region of interest
pts1 = np.float32([box[1],box[0],box[2],box[3]])
# provide new coordinates of cornerpoints
pts2 = np.float32([[0,0],[0,450],[630,0],[630,450]])
# determine and apply the transformation matrix
M = cv2.getPerspectiveTransform(pts1,pts2)
result = cv2.warpPerspective(image,M,(630,450))
#draw rectangle on original image
cv2.drawContours(image, [box], 0, (255,0,0), 2)
#show image
cv2.imshow("Result", result)
cv2.imshow("Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I have an image like the one below, and I want to determine the number of rectangles in it. I know how to do this if they are filled.
contours = cv2.findContours(image.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if imutils.is_cv2() else contours[1]
print(len(contours))
But this does not work if the rectangle is empty.
I also do not know how to fill the rectangles in the image. I know how to fill the contours if they are drawn using OpenCV, but I do not know how to fill empty rectangles already present in the image.
Assuming you have tried shape detectors, line detection, etc. and not succeeded, here is another way of solving this problem.
If this is a grayscale PNG image, you can use segmentation by color to achieve this.
I would approach it like so:
count = 0
for each pixel in the image:
    if color(pixel) == white /* 255 */:
        count++
        floodfill using this pixel as a seed pixel and target color as count
no_of_rectangles = count - 1 /* subtract 1 since the background will be colored too */
This assumes the rectangles have continuous lines, else the floodfill will leak into other rectangles.
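A rough sketch of that idea with OpenCV, assuming img is a grayscale image with the rectangle outlines drawn in black (0) on a white (255) background, and far fewer than 255 regions so the labels never collide with white:
import cv2
img = cv2.imread('rectangles.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file name
filled = img.copy()
count = 0
for y in range(filled.shape[0]):
    for x in range(filled.shape[1]):
        if filled[y, x] == 255:                       # an unlabelled white pixel
            count += 1                                # use the running count as the region label
            cv2.floodFill(filled, None, (x, y), count)
no_of_rectangles = count - 1                          # the outer background is also a white region
print(no_of_rectangles)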
Filled or not should not make a difference if you find the outer contours (RETR_EXTERNAL). The following code will give you the number 3.
canvas = np.zeros(img.shape, np.uint8)
img2gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(img2gray,128,255,cv2.THRESH_BINARY_INV)
im2,contours,hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print(len(contours))
for cont in contours:
    cv2.drawContours(canvas, [cont], -1, (0, 255, 0), 3)
cv2.imshow('contours',canvas)
cv2.waitKey(30000)
cv2.destroyAllWindows()
Notice if you use RETR_TREE as the 2nd parameter in findContours(), you get all 6 contours, including the inner ones.
Obviously, this assumes that the image only contains rectangles and it doesn't distinguish different shapes.
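For example, swapping only the retrieval mode in the call above (same thresh image, same OpenCV 3.x return signature):
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
print(len(contours))  # 6 here: the three outer contours plus the three inner ones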
I have the following code in Python to find contours in my image:
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 127, 255, 0)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
Now I want to copy the area inside the first contour to another image, but I can't find any tutorial or example code that shows how to do that.
Here's a fully working example. It's a bit overkill in that it outputs all the contours, but I think you may find a way to tweak it to your liking. Also, I'm not sure what you mean by copying, so I'll assume you just want the contours written out to files.
We will start with an image like so (in this case you will notice I don't need to threshold the image). The script below can be broken down into 6 major steps:
Canny filter to find the edges
cv2.findContours to keep track of our contours; note that we only need the outer contours, hence the cv2.RETR_EXTERNAL flag.
cv2.drawContours draws the shapes of each contour to our image
Loop through all contours and put bounding boxes around them.
Use the x,y,w,h information of our boxes to help us crop every contour
Write the cropped image to a file.
import cv2
image = cv2.imread('images/blobs1.png')
edged = cv2.Canny(image, 175, 200)
contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image, contours, -1, (0,255,0), 3)
cv2.imshow("Show contour", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
for i,c in enumerate(contours):
    rect = cv2.boundingRect(c)
    x,y,w,h = rect
    box = cv2.rectangle(image, (x,y), (x+w,y+h), (0,0,255), 2)
    cropped = image[y: y+h, x: x+w]
    cv2.imshow("Show Boxes", cropped)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    cv2.imwrite("blobby"+str(i)+".png", cropped)
cv2.imshow("Show Boxes", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Not familiar with cv2.findContours, but I imagine that a contour is represented by an array of points with row/column (x/y) values. If this is the case, and the contour is a single pixel in width, then there should be two points for every row: one each on the left and right extremes of the contour.
For each row in the contour:
    select all the points in the image that are between the two contour points for that row
    save those points to a new array
As @DanMašek points out, if the points in the contour array describe a simple shape with only the ends, corners or breakpoints represented, then you would need to fill in the gaps to use the method above.
Also if the contour shape is something like a star you would need to figure out a different method for determining if an image point is inside of the contour. The method I posted is a bit naive - but might be a good starting point. For a convoluted shape like a star there might be multiple points per row of the contour but it seems like the points would come in pairs and the points you are interested in would be between the pairs.
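A rough sketch of that row-by-row idea, assuming contour is an Nx1x2 point array from cv2.findContours and img is the source grayscale image (naive: it only handles one span per row):
import numpy as np
pts = contour.reshape(-1, 2)                 # (x, y) pairs along the contour
out = np.zeros_like(img)
for row in np.unique(pts[:, 1]):
    cols = pts[pts[:, 1] == row, 0]          # contour x-positions on this row
    left, right = cols.min(), cols.max()     # leftmost and rightmost contour points
    out[row, left:right + 1] = img[row, left:right + 1]   # copy the pixels between them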
I am working on retinal fundus images. The image consists of a circular retina on a black background. With OpenCV, I have managed to get a contour which surrounds the whole circular retina. What I need is to crop out the circular retina from the black background.
It is unclear in your question whether you want to actually crop out the information that is defined within the contour or mask out the information that isn't relevant to the contour chosen. I'll explore what to do in both situations.
Masking out the information
Assuming you ran cv2.findContours on your image, you will have received a structure that lists all of the contours available in your image. I'm also assuming that you know the index of the contour that was used to surround the object you want. Assuming this is stored in idx, first use cv2.drawContours to draw a filled version of this contour onto a blank image, then use this image to index into your image to extract out the object. This logic masks out any irrelevant information and only retains what is important, which is defined within the contour you have selected. The code to do this would look something like the following, assuming your image is a grayscale image stored in img:
import numpy as np
import cv2
img = cv2.imread('...', 0) # Read in your image
# contours, _ = cv2.findContours(...) # Your call to find the contours using OpenCV 2.4.x
_, contours, _ = cv2.findContours(...) # Your call to find the contours
idx = ... # The index of the contour that surrounds your object
mask = np.zeros_like(img) # Create mask where white is what we want, black otherwise
cv2.drawContours(mask, contours, idx, 255, -1) # Draw filled contour in mask
out = np.zeros_like(img) # Extract out the object and place into output image
out[mask == 255] = img[mask == 255]
# Show the output image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
If you actually want to crop...
If you want to crop the image, you need to define the minimum spanning bounding box of the area defined by the contour. You can find the top left and lower right corner of the bounding box, then use indexing to crop out what you need. The code will be the same as before, but there will be an additional cropping step:
import numpy as np
import cv2
img = cv2.imread('...', 0) # Read in your image
# contours, _ = cv2.findContours(...) # Your call to find the contours using OpenCV 2.4.x
_, contours, _ = cv2.findContours(...) # Your call to find the contours
idx = ... # The index of the contour that surrounds your object
mask = np.zeros_like(img) # Create mask where white is what we want, black otherwise
cv2.drawContours(mask, contours, idx, 255, -1) # Draw filled contour in mask
out = np.zeros_like(img) # Extract out the object and place into output image
out[mask == 255] = img[mask == 255]
# Now crop
(y, x) = np.where(mask == 255)
(topy, topx) = (np.min(y), np.min(x))
(bottomy, bottomx) = (np.max(y), np.max(x))
out = out[topy:bottomy+1, topx:bottomx+1]
# Show the output image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
The cropping code works such that when we define the mask to extract out the area defined by the contour, we additionally find the smallest horizontal and vertical coordinates, which define the top left corner of the contour. We similarly find the largest horizontal and vertical coordinates, which define the bottom right corner of the contour. We then use indexing with these coordinates to crop what we actually need. Note that this performs cropping on the masked image - that is, the image that removes everything but the information contained within the largest contour.
Note with OpenCV 3.x
It should be noted that the above code assumes you are using OpenCV 2.4.x. Take note that in OpenCV 3.x, the definition of cv2.findContours has changed. Specifically, the output is a three-element tuple where the first element is the source image, while the other two elements are the same as in OpenCV 2.4.x. Therefore, simply change the cv2.findContours statement in the above code to ignore the first output:
_, contours, _ = cv2.findContours(...) # Your call to find contours
Here's another approach to crop out a rectangular ROI. The main idea is to obtain a binary image with Otsu's threshold, find contours, and then extract the ROI using Numpy slicing. Assuming you have an input image like this:
Extracted ROI
import cv2
# Load image, convert to grayscale, and apply Otsu's threshold
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY)[1]
# Find contour and sort by contour area
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
# Find bounding box and extract ROI
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    ROI = image[y:y+h, x:x+w]
    break
cv2.imshow('ROI',ROI)
cv2.imwrite('ROI.png',ROI)
cv2.waitKey()
This is a pretty simple way. Mask the image with transparency.
Read the image
Make a grayscale version.
Otsu Threshold
Apply morphology close and open to the thresholded image to clean up the mask
Put the mask into the alpha channel of the input
Save the output
Input
Code
import cv2
import numpy as np
# load image and convert to grayscale
img = cv2.imread('retina.jpeg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold input image using otsu thresholding as mask and refine with morphology
ret, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
kernel = np.ones((9,9), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# put mask into alpha channel of result
result = img.copy()
result = cv2.cvtColor(result, cv2.COLOR_BGR2BGRA)
result[:, :, 3] = mask
# save resulting masked image
cv2.imwrite('retina_masked.png', result)
Output