I have the following code in Python to find contours in my image:
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 127, 255, 0)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
Now I want to copy the area inside the first contour to another image, but I can't find any tutorial or example code that shows how to do that.
Here's a fully working example. It's a bit overkill in that it outputs all the contours, but I think you may find a way to tweak it to your liking. I'm also not sure what you mean by copying, so I'll assume you just want each contour written out to a file.
We will start with an image like so (in this case you will notice I don't need to threshold the image). The script below can be broken down into 6 major steps:
Canny filter to find the edges
cv2.findContours to find the contours; note that we only need the outer contours, hence the cv2.RETR_EXTERNAL flag.
cv2.drawContours draws the shape of each contour onto our image
Loop through all contours and put bounding boxes around them.
Use the x,y,w,h information of our boxes to help us crop every contour
Write the cropped image to a file.
import cv2
image = cv2.imread('images/blobs1.png')
edged = cv2.Canny(image, 175, 200)
contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image, contours, -1, (0,255,0), 3)
cv2.imshow("Show contour", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
for i, c in enumerate(contours):
    rect = cv2.boundingRect(c)
    x, y, w, h = rect
    box = cv2.rectangle(image, (x, y), (x+w, y+h), (0, 0, 255), 2)
    cropped = image[y: y+h, x: x+w]
    cv2.imshow("Show Boxes", cropped)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    cv2.imwrite("blobby" + str(i) + ".png", cropped)
cv2.imshow("Show Boxes", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I'm not familiar with cv2.findContours, but I imagine that a contour is represented by an array of points with row/column (x/y) values. If this is the case and the contour is a single pixel in width, then there should be two points for every row: one each on the left and right extremes of the contour.
For each row in the contour
*select* all the points in the image that are between the two contour points for that row
save those points to a new array.
As @DanMašek points out, if the points in the contour array describe a simple shape with only the ends, corners or breakpoints represented, then you would need to fill in the gaps to use the method above.
Also, if the contour shape is something like a star, you would need a different method for determining whether an image point is inside the contour. The method I posted is a bit naive but might be a good starting point. For a convoluted shape like a star there might be multiple points per row of the contour, but it seems like the points would come in pairs and the points you are interested in would lie between the pairs.
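For what it's worth, here is a minimal sketch of that row-scan idea (my own illustration, not the poster's code; it assumes an OpenCV version where findContours returns two values, and uses cv2.CHAIN_APPROX_NONE so that every outline point is present):
import cv2
import numpy as np

im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(imgray, 127, 255, 0)
# CHAIN_APPROX_NONE keeps every point of the outline, so no gaps need filling
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

cnt = contours[0].reshape(-1, 2)   # (x, y) points of the first contour
out = np.zeros_like(im)            # destination image

for row in np.unique(cnt[:, 1]):   # every image row the contour touches
    xs = cnt[cnt[:, 1] == row, 0]  # contour x-positions in that row
    out[row, xs.min():xs.max() + 1] = im[row, xs.min():xs.max() + 1]

cv2.imwrite('inside_first_contour.png', out)  # hypothetical output filename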
I have some images with a black background and some text in the corner:
I'm trying to do a rectangular crop to make it look like:
The text on the sides as well as the window dimensions are of varying sizes. My code isn't cropping correctly; what am I doing wrong?
I've tried removing the text in the bottom right corner first and cropping, that doesn't work either.
def crop_cont(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)
    _, contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = contours[0]
    x, y, w, h = cv2.boundingRect(cnt)
    crop = img[y:y+h, x:x+w]
    return crop
Your code is generally OK. The issue is that you are using contours[0]. You have to find the right contour (there is more than one). In this particular example, the right contour is the biggest one, so just iterate over all found contours and pick the one with the largest area, as sketched below.
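A minimal sketch of that fix, keeping the three-value findContours unpacking from the question (adjust for your OpenCV version):
def crop_cont(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)
    _, contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv2.contourArea)  # largest contour instead of contours[0]
    x, y, w, h = cv2.boundingRect(cnt)
    return img[y:y+h, x:x+w]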
I'm trying to segment the football field. I have found the largest contour that represents the field area. I need to generate a binary image using that area.
I'm following a research paper and have followed all the steps, including:
Convert to HSV
Capture the H channel
Generate Histogram
Some processing (not detailed here, as it is irrelevant to the question)
Find the largest blob in the binary image
I have done it using Contours and I have the largest contour which represents the field area.
I need to use this specific contour to generate a new binary image which will contain only the area of this contour.
# Find largest blob
# 'mask' is the processed binary image; using that mask I find the
# contours and draw them on the original image
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
largest_blob = max(contours, key=cv2.contourArea)
cv2.drawContours(image, largest_blob, -1, (0, 0, 255), 2)
I have done it using cv2.fillPoly(noiseless_mask, [largest_blob], 255), where noiseless_mask = np.zeros_like(mask).
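Putting that together, here is a sketch of the fillPoly approach (variable names follow the snippet above; image is assumed to be the original BGR frame and mask the processed binary image):
import cv2
import numpy as np

# 'mask' is the processed binary image from the pipeline above
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
largest_blob = max(contours, key=cv2.contourArea)

noiseless_mask = np.zeros_like(mask)               # blank single-channel image
cv2.fillPoly(noiseless_mask, [largest_blob], 255)  # fill only the largest contour

# Optionally apply the new binary mask to the original image
field_only = cv2.bitwise_and(image, image, mask=noiseless_mask)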
You may do this as follows. First, find the bounding rectangle for that contour.
The bounding rectangle is indicated by the green box
x,y,w,h = cv2.boundingRect(cnt)
Then use these to crop the image
field = img[y:y+h, x:x+w, :]
Then you may apply binarization to the field object; more info can be found here:
https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
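For instance, Otsu thresholding could be applied to the cropped field (just one option, not prescribed by the answer above; the right binarization depends on your pipeline):
gray_field = cv2.cvtColor(field, cv2.COLOR_BGR2GRAY)
_, binary_field = cv2.threshold(gray_field, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)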
Let's say I have an image with an embossed or a debossed object, like this
or
Is there a way to determine that the object above is embossed and the object below is debossed using OpenCV? Preferably in C++, but Python is also fine. I couldn't find any good resources on the internet.
Here's an approach which takes advantage of the sunken and lifted contours of the embossed/debossed image. The main idea is:
Convert the image to grayscale
Perform a morphological transformation
Find outlines using Canny edge detection
Dilate canny image to merge individual contours into a single contour
Perform contour detection to find the ROI dimensions of top/bottom halves
Obtain ROI of top/bottom canny image
Count non-zero array elements for each half
Convert to grayscale and perform morphological transformation
Perform Canny edge detection to find outlines. The key to determining whether an object is embossed or debossed is to compare the Canny edges. Here's the approach: if the object's upper half has more contour/edge pixels than its lower half, it is debossed; if the upper half has fewer pixels than the lower half, it is embossed.
Now that we have the Canny edges, we dilate the image until all the contours connect, so that we obtain one single object.
We then perform contour detection to obtain the ROI of the objects
From here, we separate each object into top and bottom sections
Now that we have the ROI of the top and bottom sections, we crop the ROI in the canny image
With each half, we count non-zero array elements using cv2.countNonZero(). For the embossed object, we get this
('top', 1085)
('bottom', 1899)
For the debossed object, we get this
('top', 979)
('bottom', 468)
Therefore, by comparing the values between the two halves: if the top half has fewer pixels than the bottom, it is embossed; if it has more, it is debossed.
import numpy as np
import cv2
original_image = cv2.imread("1.jpg")
image = original_image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
morph = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
canny = cv2.Canny(morph, 130, 255, 1)
# Dilate canny image so contours connect and form a single contour
dilate = cv2.dilate(canny, kernel, iterations=4)
cv2.imshow("morph", morph)
cv2.imshow("canny", canny)
cv2.imshow("dilate", dilate)
# Find contours in the image
cnts = cv2.findContours(dilate.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
contours = []
# For each image separate it into top/bottom halfs
for c in cnts:
    # Obtain bounding rectangle for each contour
    x, y, w, h = cv2.boundingRect(c)

    # Draw bounding box rectangle
    cv2.rectangle(original_image, (x,y), (x+w,y+h), (0,255,0), 3)
    # cv2.rectangle(original_image, (x,y), (x+w,y+h//2), (0,255,0), 3) # top
    # cv2.rectangle(original_image, (x,y+h//2), (x+w,y+h), (0,255,0), 3) # bottom

    # Use integer division so the coordinates stay valid array indices
    top_half = ((x, y), (x+w, y+h//2))
    bottom_half = ((x, y+h//2), (x+w, y+h))

    # Collect top/bottom ROIs
    contours.append((top_half, bottom_half))

for index, c in enumerate(contours):
    top_half, bottom_half = c

    top_x1, top_y1 = top_half[0]
    top_x2, top_y2 = top_half[1]
    bottom_x1, bottom_y1 = bottom_half[0]
    bottom_x2, bottom_y2 = bottom_half[1]

    # Grab ROI of top/bottom section from canny image
    top_image = canny[top_y1:top_y2, top_x1:top_x2]
    bottom_image = canny[bottom_y1:bottom_y2, bottom_x1:bottom_x2]

    cv2.imshow('top_image', top_image)
    cv2.imshow('bottom_image', bottom_image)

    # Count non-zero array elements
    top_pixels = cv2.countNonZero(top_image)
    bottom_pixels = cv2.countNonZero(bottom_image)

    print('top', top_pixels)
    print('bottom', bottom_pixels)
cv2.imshow("detected", original_image)
print('contours detected: {}'.format(len(contours)))
cv2.waitKey(0)
One insight you could use is that an embossed object is usually brighter than a debossed object.
I would probably do an edge detection to find the "boss-edges" which should form a closed polygon, and compare the relative lightness value of the enclosed "bossment". Special care must be taken for objects with holes, e.g. the letter O, but it is do-able.
You can probably do more sophisticated processing if you know the direction of the light hitting the bossment, e.g. if you know the light is coming from the top left, you can try focusing only on the top-left edge pixels.
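As a rough sketch of the brightness comparison (my own illustration, not the answer's code; gray is assumed to be the grayscale image and cnt a closed contour found around the bossment):
import cv2
import numpy as np

mask = np.zeros_like(gray)
cv2.drawContours(mask, [cnt], -1, 255, -1)            # filled contour used as a mask
inside_mean = cv2.mean(gray, mask=mask)[0]            # mean brightness inside the outline
outside_mean = cv2.mean(gray, mask=cv2.bitwise_not(mask))[0]

# Per the heuristic above: brighter inside than the surroundings suggests embossed
embossed = inside_mean > outside_mean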
I am using the OpenCV contour function in Python. For example, on an image like this:
contours, _ = cv2.findContours(img_expanded_padded, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
It works well except that it cuts off the corners on the inside of the contour as seen above. Are there any options that would leave this corner in?
Travelling along the contours and filling them in manually will be too computationally expensive. The above is only an example. This will be performed many times on images 5400x5400 or more...
I can find the edges with the code below, and have filled corners as a result but then I need to extract these as contours again.
# FIND ALL HORIZONTAL AND VERTICAL EDGES AND COMBINE THEM
edges_expanded_x = np.absolute(cv2.Sobel(img_expanded_padded,cv2.CV_64F, 1, 0, ksize=3))
edges_expanded_y = np.absolute(cv2.Sobel(img_expanded_padded,cv2.CV_64F, 0, 1, ksize=3))
edges_expanded = np.logical_or(edges_expanded_x, edges_expanded_y)
# GET RID OF DOUBLE EDGE THAT RESULTS FROM SOBEL FILTER
edges_expanded = np.multiply(img_expanded_padded,edges_expanded)
Are there any OpenCV settings or functions I can use to accomplish this?
EDIT:
I should clarify that my goal is to have a single-pixel continuous contour. I need the contours, not an array of the entire image including the contours.
EDIT: The above images are zoomed into my test image. The actual pixels are as shown by the red grids in the images below.
There is no need to use cv2.Sobel; you can simply draw the contours with cv2.drawContours on a black background. The black background can be created with the help of np.zeros.
import cv2
import numpy as np

img = cv2.imread('contouring.png', 0)
contours, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
bgr = np.zeros((img.shape[0], img.shape[1]), dtype='uint8')
cv2.drawContours(bgr, contours, -1, (255, 255, 255), 1)
If you want the contour lines to be thick, you can use cv2.dilate for that. Then, to prevent the cutting of corners, cv2.bitwise_and can be used along with cv2.bitwise_not, as shown below:
bgr = cv2.dilate(bgr, np.ones((31, 31), np.uint8), iterations=1)
bgr = cv2.bitwise_and(bgr, cv2.bitwise_not(img))
This gives contours which are 15 pixels thick.
EDIT: The first image of thin contours still cuts corners. To obtain single-pixel contours which do not cut corners, we can use a kernel size of 3x3.
img = cv2.imread('contouring.png',0)
contours, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
bgr = np.zeros((img.shape[0], img.shape[1]), dtype= 'uint8')
cv2.drawContours(bgr, contours, -1, (255,255,255), 1)
bgr = cv2.dilate(bgr, np.ones((3, 3), np.uint8), iterations=1)
bgr = cv2.bitwise_and(bgr, cv2.bitwise_not(img))
This gives us
I have checked it by using cv2.bitwise_and between bgr and img, and I obtain a black image, indicating that no white pixels are cutting corners.
I have an image like the one below, and I want to determine the number of rectangles in the image. I do know how to do this if they were filled.
contours = cv2.findContours(image.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if imutils.is_cv2() else contours[1]
print len(contours)
But this does not work if the rectangle is empty.
I also do not know how to fill the rectangles in the image. I know how to fill the contours if they are drawn using OpenCV, but I do not know how to fill empty rectangles already present in the image.
Assuming you have tried shape detectors, line detection, etc. and not succeeded, here is another way of solving this problem.
If this is a grayscale PNG image, you can use segmentation by color to achieve this.
I would approach it like so:
count = 0
for each pixel in the image:
    if color(pixel) == white /* 255 */:
        count++
        floodfill using this pixel as the seed pixel and count as the target color
no_of_rectangles = count - 1  /* subtract 1 since the background will be colored too */
This assumes the rectangles have continuous lines, else the floodfill will leak into other rectangles.
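A rough Python sketch of that counting idea (my own illustration, not the answer's code; it assumes dark rectangle outlines on a white, 255, background, fewer than 255 regions, and a placeholder filename):
import cv2

img = cv2.imread('rectangles.png', 0)    # grayscale: dark outlines on white
count = 0
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        if img[y, x] == 255:             # an as-yet-unlabelled white pixel
            count += 1
            cv2.floodFill(img, None, (x, y), count)  # relabel that whole region
print(count - 1)                         # subtract 1 for the background region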
Filled or not should not make a difference if you find only the outer contours (RETR_EXTERNAL). The following code will give you the number 3.
canvas = np.zeros(img.shape, np.uint8)
img2gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(img2gray,128,255,cv2.THRESH_BINARY_INV)
im2,contours,hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print(len(contours))
for cont in contours:
    cv2.drawContours(canvas, [cont], -1, (0, 255, 0), 3)
cv2.imshow('contours',canvas)
cv2.waitKey(30000)
cv2.destroyAllWindows()
Notice if you use RETR_TREE as the 2nd parameter in findContours(), you get all 6 contours, including the inner ones.
Obviously, this assumes that the image only contains rectangles and it doesn't distinguish different shapes.